var/home/core/zuul-output/logs/kubelet.log
Dec 09 14:14:21 crc systemd[1]: Starting Kubernetes Kubelet... Dec 09 14:14:21 crc kubenswrapper[5116]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 14:14:21 crc kubenswrapper[5116]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 09 14:14:21 crc kubenswrapper[5116]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 14:14:21 crc kubenswrapper[5116]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 14:14:21 crc kubenswrapper[5116]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 09 14:14:21 crc kubenswrapper[5116]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.595294 5116 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600105 5116 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600150 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600155 5116 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600159 5116 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600163 5116 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600167 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600172 5116 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600176 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600180 5116 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600183 5116 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600187 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600190 5116 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600193 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600197 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600200 5116 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600203 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600207 5116 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600210 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600214 5116 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600217 5116 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600220 5116 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600224 5116 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600228 5116 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600232 5116 feature_gate.go:328] unrecognized feature 
gate: ClusterAPIInstallIBMCloud Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600235 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600238 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600241 5116 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600245 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600248 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600251 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600255 5116 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600261 5116 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600267 5116 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600274 5116 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600278 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600282 5116 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600286 5116 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600289 5116 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600293 5116 feature_gate.go:328] unrecognized feature gate: Example2 Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600299 5116 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600302 5116 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600306 5116 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600310 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600313 5116 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600317 5116 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600320 5116 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600325 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600328 5116 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600332 5116 feature_gate.go:328] unrecognized feature gate: 
AdminNetworkPolicy Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600337 5116 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600341 5116 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600346 5116 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600349 5116 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600353 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600358 5116 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600362 5116 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600365 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600369 5116 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600372 5116 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600376 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600379 5116 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600387 5116 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600391 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600394 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600397 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600401 5116 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600404 5116 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600407 5116 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600410 5116 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600413 5116 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600416 5116 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600420 5116 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600423 5116 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600426 5116 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 09 14:14:21 crc kubenswrapper[5116]: 
W1209 14:14:21.600429 5116 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600434 5116 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600438 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600441 5116 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600446 5116 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600449 5116 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600453 5116 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600457 5116 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600460 5116 feature_gate.go:328] unrecognized feature gate: Example Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600463 5116 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600466 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.600470 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601005 5116 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601013 5116 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601018 5116 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601022 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601025 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601029 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601032 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601038 5116 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601041 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601045 5116 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601048 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601051 5116 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601054 5116 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601058 5116 
feature_gate.go:328] unrecognized feature gate: NewOLM Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601061 5116 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601064 5116 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601067 5116 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601070 5116 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601073 5116 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601076 5116 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601080 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601083 5116 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601086 5116 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601089 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601093 5116 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601096 5116 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601099 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601103 5116 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601106 5116 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601109 5116 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601113 5116 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601116 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601120 5116 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601124 5116 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601128 5116 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601131 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601134 5116 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601137 5116 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601141 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601146 5116 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601149 5116 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601153 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601156 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601159 5116 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601162 5116 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601166 5116 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601169 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601172 5116 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601175 5116 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601178 5116 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601181 5116 feature_gate.go:328] unrecognized feature gate: Example Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601184 5116 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601187 5116 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601191 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601194 5116 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601198 5116 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601201 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601206 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601210 5116 feature_gate.go:328] unrecognized feature gate: 
ImageStreamImportMode Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601213 5116 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601217 5116 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601220 5116 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601223 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601226 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601230 5116 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601233 5116 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601238 5116 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601242 5116 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601247 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601250 5116 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601254 5116 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601258 5116 feature_gate.go:328] unrecognized feature gate: Example2 Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601261 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601264 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601268 5116 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601272 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601275 5116 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601279 5116 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601283 5116 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601286 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601289 5116 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601294 5116 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601297 5116 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601300 5116 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 09 
14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601304 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.601307 5116 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601751 5116 flags.go:64] FLAG: --address="0.0.0.0" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601768 5116 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601777 5116 flags.go:64] FLAG: --anonymous-auth="true" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601783 5116 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601790 5116 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601794 5116 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601800 5116 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601806 5116 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601810 5116 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601815 5116 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601819 5116 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601824 5116 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601831 5116 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601836 5116 flags.go:64] FLAG: --cgroup-root="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601840 5116 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601844 5116 flags.go:64] FLAG: --client-ca-file="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601848 5116 flags.go:64] FLAG: --cloud-config="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601851 5116 flags.go:64] FLAG: --cloud-provider="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601855 5116 flags.go:64] FLAG: --cluster-dns="[]" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601860 5116 flags.go:64] FLAG: --cluster-domain="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601864 5116 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601868 5116 flags.go:64] FLAG: --config-dir="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601871 5116 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601876 5116 flags.go:64] FLAG: --container-log-max-files="5" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601881 5116 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601885 5116 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601890 5116 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 09 14:14:21 crc 
kubenswrapper[5116]: I1209 14:14:21.601893 5116 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601898 5116 flags.go:64] FLAG: --contention-profiling="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601902 5116 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601906 5116 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601910 5116 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601914 5116 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601919 5116 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601923 5116 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601927 5116 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601931 5116 flags.go:64] FLAG: --enable-load-reader="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601936 5116 flags.go:64] FLAG: --enable-server="true" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601940 5116 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601945 5116 flags.go:64] FLAG: --event-burst="100" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601967 5116 flags.go:64] FLAG: --event-qps="50" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601972 5116 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601977 5116 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601980 5116 flags.go:64] FLAG: --eviction-hard="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601986 5116 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601991 5116 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601995 5116 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.601999 5116 flags.go:64] FLAG: --eviction-soft="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602003 5116 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602007 5116 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602010 5116 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602014 5116 flags.go:64] FLAG: --experimental-mounter-path="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602018 5116 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602022 5116 flags.go:64] FLAG: --fail-swap-on="true" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602025 5116 flags.go:64] FLAG: --feature-gates="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602030 5116 flags.go:64] FLAG: --file-check-frequency="20s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602034 5116 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 09 14:14:21 crc kubenswrapper[5116]: 
I1209 14:14:21.602039 5116 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602043 5116 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602047 5116 flags.go:64] FLAG: --healthz-port="10248" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602050 5116 flags.go:64] FLAG: --help="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602054 5116 flags.go:64] FLAG: --hostname-override="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602058 5116 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602062 5116 flags.go:64] FLAG: --http-check-frequency="20s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602066 5116 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602070 5116 flags.go:64] FLAG: --image-credential-provider-config="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602073 5116 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602077 5116 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602081 5116 flags.go:64] FLAG: --image-service-endpoint="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602084 5116 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602089 5116 flags.go:64] FLAG: --kube-api-burst="100" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602093 5116 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602097 5116 flags.go:64] FLAG: --kube-api-qps="50" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602213 5116 flags.go:64] FLAG: --kube-reserved="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602219 5116 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602222 5116 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602228 5116 flags.go:64] FLAG: --kubelet-cgroups="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602232 5116 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602236 5116 flags.go:64] FLAG: --lock-file="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602239 5116 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602243 5116 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602248 5116 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602254 5116 flags.go:64] FLAG: --log-json-split-stream="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602258 5116 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602262 5116 flags.go:64] FLAG: --log-text-split-stream="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602266 5116 flags.go:64] FLAG: --logging-format="text" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602270 5116 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 
14:14:21.602274 5116 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602280 5116 flags.go:64] FLAG: --manifest-url="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602284 5116 flags.go:64] FLAG: --manifest-url-header="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602289 5116 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602293 5116 flags.go:64] FLAG: --max-open-files="1000000" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602298 5116 flags.go:64] FLAG: --max-pods="110" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602302 5116 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602306 5116 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602310 5116 flags.go:64] FLAG: --memory-manager-policy="None" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602313 5116 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602318 5116 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602321 5116 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602326 5116 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602338 5116 flags.go:64] FLAG: --node-status-max-images="50" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602342 5116 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602347 5116 flags.go:64] FLAG: --oom-score-adj="-999" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602352 5116 flags.go:64] FLAG: --pod-cidr="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602356 5116 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602363 5116 flags.go:64] FLAG: --pod-manifest-path="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602367 5116 flags.go:64] FLAG: --pod-max-pids="-1" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602371 5116 flags.go:64] FLAG: --pods-per-core="0" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602374 5116 flags.go:64] FLAG: --port="10250" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602381 5116 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602385 5116 flags.go:64] FLAG: --provider-id="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602388 5116 flags.go:64] FLAG: --qos-reserved="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602392 5116 flags.go:64] FLAG: --read-only-port="10255" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602396 5116 flags.go:64] FLAG: --register-node="true" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602400 5116 flags.go:64] FLAG: --register-schedulable="true" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602404 5116 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 
14:14:21.602411 5116 flags.go:64] FLAG: --registry-burst="10" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602415 5116 flags.go:64] FLAG: --registry-qps="5" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602418 5116 flags.go:64] FLAG: --reserved-cpus="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602422 5116 flags.go:64] FLAG: --reserved-memory="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602428 5116 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602432 5116 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602436 5116 flags.go:64] FLAG: --rotate-certificates="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602440 5116 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602443 5116 flags.go:64] FLAG: --runonce="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602447 5116 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602451 5116 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602455 5116 flags.go:64] FLAG: --seccomp-default="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602459 5116 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602462 5116 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602466 5116 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602470 5116 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602474 5116 flags.go:64] FLAG: --storage-driver-password="root" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602478 5116 flags.go:64] FLAG: --storage-driver-secure="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602482 5116 flags.go:64] FLAG: --storage-driver-table="stats" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602486 5116 flags.go:64] FLAG: --storage-driver-user="root" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602497 5116 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602501 5116 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602505 5116 flags.go:64] FLAG: --system-cgroups="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602509 5116 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602516 5116 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602521 5116 flags.go:64] FLAG: --tls-cert-file="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602525 5116 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602529 5116 flags.go:64] FLAG: --tls-min-version="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602533 5116 flags.go:64] FLAG: --tls-private-key-file="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602537 5116 flags.go:64] FLAG: --topology-manager-policy="none" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602541 5116 flags.go:64] 
FLAG: --topology-manager-policy-options="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602544 5116 flags.go:64] FLAG: --topology-manager-scope="container" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602548 5116 flags.go:64] FLAG: --v="2" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602553 5116 flags.go:64] FLAG: --version="false" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602558 5116 flags.go:64] FLAG: --vmodule="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602564 5116 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.602569 5116 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602673 5116 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602678 5116 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602681 5116 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602685 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602688 5116 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602691 5116 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602695 5116 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602698 5116 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602702 5116 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602707 5116 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602711 5116 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602715 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602720 5116 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602724 5116 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602728 5116 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602732 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602736 5116 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602740 5116 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602743 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602746 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602751 
5116 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602755 5116 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602758 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602761 5116 feature_gate.go:328] unrecognized feature gate: Example2 Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602765 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602768 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602771 5116 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602774 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602777 5116 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602780 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602784 5116 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602790 5116 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602794 5116 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602798 5116 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602802 5116 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602805 5116 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602808 5116 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602812 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602815 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602819 5116 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602822 5116 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602825 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602828 5116 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602832 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602835 5116 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602838 5116 
feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602841 5116 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602844 5116 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602848 5116 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602851 5116 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602855 5116 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602858 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602862 5116 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602866 5116 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602869 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602873 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602876 5116 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602879 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602882 5116 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602886 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602889 5116 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602892 5116 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602895 5116 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602900 5116 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602903 5116 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602906 5116 feature_gate.go:328] unrecognized feature gate: Example Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602910 5116 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602913 5116 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602916 5116 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602919 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602922 5116 
feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602926 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602929 5116 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602932 5116 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602935 5116 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602938 5116 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602941 5116 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602945 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602948 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602970 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602973 5116 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602977 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.602982 5116 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.603003 5116 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.603011 5116 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.603015 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.603182 5116 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.613488 5116 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.613521 5116 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613589 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613598 5116 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613603 5116 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613607 5116 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 09 
14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613612 5116 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613617 5116 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613621 5116 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613625 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613629 5116 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613634 5116 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613638 5116 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613642 5116 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613646 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613650 5116 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613654 5116 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613659 5116 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613663 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613667 5116 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613671 5116 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613675 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613679 5116 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613684 5116 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613688 5116 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613693 5116 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613697 5116 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613703 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613708 5116 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613713 5116 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613717 5116 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613721 5116 
feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613725 5116 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613730 5116 feature_gate.go:328] unrecognized feature gate: Example Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613734 5116 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613738 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613742 5116 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613747 5116 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613751 5116 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613755 5116 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613759 5116 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613764 5116 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613768 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613772 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613776 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613781 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613785 5116 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613789 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613794 5116 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613799 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613803 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613810 5116 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613814 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613818 5116 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613823 5116 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613828 5116 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613832 5116 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 
09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613839 5116 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613845 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613849 5116 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613860 5116 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613866 5116 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613872 5116 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613877 5116 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613881 5116 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613885 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613890 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613894 5116 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613899 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613903 5116 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613907 5116 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613912 5116 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613917 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613921 5116 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613926 5116 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613930 5116 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613935 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613939 5116 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613944 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613948 5116 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613969 5116 feature_gate.go:328] unrecognized feature gate: Example2 Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613974 5116 feature_gate.go:328] unrecognized feature gate: 
GatewayAPIController Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613978 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613984 5116 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613988 5116 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613993 5116 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.613998 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614003 5116 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.614012 5116 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614165 5116 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614173 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614179 5116 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614184 5116 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614190 5116 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614195 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614199 5116 feature_gate.go:328] unrecognized feature gate: Example2 Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614204 5116 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614209 5116 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614214 5116 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614218 5116 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614223 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614227 5116 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614231 5116 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614236 5116 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614241 5116 feature_gate.go:328] unrecognized feature 
gate: AWSDedicatedHosts Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614245 5116 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614249 5116 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614254 5116 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614258 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614263 5116 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614268 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614272 5116 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614277 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614281 5116 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614285 5116 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614290 5116 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614295 5116 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614300 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614304 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614309 5116 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614313 5116 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614318 5116 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614322 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614326 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614331 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614335 5116 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614360 5116 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614365 5116 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614370 5116 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614376 5116 feature_gate.go:351] Setting GA feature gate 
ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614382 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614386 5116 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614391 5116 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614396 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614401 5116 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614405 5116 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614410 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614414 5116 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614419 5116 feature_gate.go:328] unrecognized feature gate: Example Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614423 5116 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614428 5116 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614432 5116 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614436 5116 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614440 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614444 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614449 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614453 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614457 5116 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614461 5116 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614466 5116 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614470 5116 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614474 5116 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614478 5116 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614482 5116 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614487 5116 feature_gate.go:328] unrecognized feature gate: 
VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614491 5116 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614495 5116 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614499 5116 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614505 5116 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614527 5116 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614532 5116 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614537 5116 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614542 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614546 5116 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614550 5116 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614555 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614560 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614564 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614568 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614573 5116 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614577 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614581 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614585 5116 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614589 5116 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 09 14:14:21 crc kubenswrapper[5116]: W1209 14:14:21.614593 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.614602 5116 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.615077 5116 server.go:962] "Client rotation is on, will bootstrap in 
background" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.617557 5116 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.620385 5116 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.620497 5116 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.621007 5116 server.go:1019] "Starting client certificate rotation" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.621116 5116 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.621163 5116 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.626261 5116 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.627986 5116 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.628154 5116 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.636513 5116 log.go:25] "Validated CRI v1 runtime API" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.652704 5116 log.go:25] "Validated CRI v1 image API" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.654294 5116 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.655669 5116 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-09-14-07-56-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.655696 5116 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.668718 5116 manager.go:217] Machine: {Timestamp:2025-12-09 14:14:21.667499166 +0000 UTC m=+0.189243974 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 
MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:d07826d4-dd25-445c-8d20-f1545448b405 BootID:7322bcb2-516b-419d-9115-e535bc272977 Filesystems:[{Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:35:46:fa Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:35:46:fa Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:07:80:d3 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:4a:32:1d Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:37:79:6a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a9:bb:de Speed:-1 Mtu:1496} {Name:eth10 MacAddress:be:fc:29:aa:8a:89 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2e:f5:9d:a1:db:2f Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 
Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.668920 5116 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.669078 5116 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.669881 5116 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.669913 5116 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.670142 5116 topology_manager.go:138] "Creating topology manager with none policy" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.670152 5116 container_manager_linux.go:306] "Creating device plugin manager" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.670171 5116 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.670310 5116 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.670584 5116 state_mem.go:36] "Initialized new in-memory state store" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.670709 5116 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.671188 5116 kubelet.go:491] "Attempting to sync node with API server" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.671207 5116 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.671220 5116 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.671233 5116 kubelet.go:397] "Adding apiserver pod source" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.671262 5116 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.672923 5116 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.672945 5116 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.673045 5116 reflector.go:200] "Failed to watch" 
err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.673102 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.673982 5116 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.674000 5116 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.675378 5116 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.675580 5116 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.675925 5116 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.676261 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.676281 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.676289 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.676296 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.676303 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.676316 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.676328 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.676336 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.676344 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.676356 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.676366 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.676508 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.676697 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 09 14:14:21 crc 
kubenswrapper[5116]: I1209 14:14:21.676710 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.677669 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.687031 5116 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.687108 5116 server.go:1295] "Started kubelet" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.687283 5116 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.687265 5116 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.687401 5116 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.687839 5116 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 09 14:14:21 crc systemd[1]: Started Kubernetes Kubelet. Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.688804 5116 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f91982a86d105 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.687058693 +0000 UTC m=+0.208803491,LastTimestamp:2025-12-09 14:14:21.687058693 +0000 UTC m=+0.208803491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.689205 5116 server.go:317] "Adding debug handlers to kubelet server" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.689221 5116 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.690184 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.690246 5116 volume_manager.go:295] "The desired_state_of_world populator starts" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.690265 5116 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.690415 5116 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.690462 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.690518 5116 certificate_manager.go:422] "Certificate rotation is 
enabled" logger="kubernetes.io/kubelet-serving" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.693748 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="200ms" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.694050 5116 factory.go:55] Registering systemd factory Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.694094 5116 factory.go:223] Registration of the systemd container factory successfully Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.696204 5116 factory.go:153] Registering CRI-O factory Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.696233 5116 factory.go:223] Registration of the crio container factory successfully Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.696308 5116 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.696338 5116 factory.go:103] Registering Raw factory Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.696355 5116 manager.go:1196] Started watching for new ooms in manager Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.696991 5116 manager.go:319] Starting recovery of all containers Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.711968 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712023 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712035 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712056 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712064 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712073 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712082 5116 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712090 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712099 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712119 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712132 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712140 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712184 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712207 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712226 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712234 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712242 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712250 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712257 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712265 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712280 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712290 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712297 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712305 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712321 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712329 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712336 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712343 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712362 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712371 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712378 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712386 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712394 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712401 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712408 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712436 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712496 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712534 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712559 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712567 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" 
volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712574 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712604 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712612 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712623 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712642 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712669 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712685 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.712692 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713641 5116 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713691 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713701 5116 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713709 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713728 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713735 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713744 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713751 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713759 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713774 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713782 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713789 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713797 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713804 5116 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713821 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713828 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713835 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713843 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713849 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713856 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713864 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713872 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713880 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713887 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713902 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713909 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713917 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713924 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713932 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713939 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713947 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713973 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713980 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713989 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.713996 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714003 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714010 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714017 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714024 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714031 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714039 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714046 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714053 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714060 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714072 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714079 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714086 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" 
volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714093 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714100 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714107 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714115 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714122 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714403 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714418 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714427 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714517 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714527 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714539 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714548 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714559 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714568 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714576 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714587 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714596 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714606 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714639 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714648 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714659 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714668 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" 
volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714679 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714687 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714695 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714705 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714713 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714723 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714732 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714742 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714750 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714759 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714769 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" 
volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714777 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714787 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714795 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714805 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714814 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714822 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714833 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714840 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714851 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714859 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714867 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714877 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714885 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714896 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714922 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714933 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714942 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714967 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714989 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.714998 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715010 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715067 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" 
volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715082 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715090 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715098 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715109 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715117 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715127 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715136 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715146 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715154 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715162 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715173 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" 
volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715181 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715192 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715200 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715208 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715219 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715227 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715238 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715250 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715261 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715273 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715281 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715294 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715303 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715314 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715322 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715332 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715373 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715381 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715392 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715402 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715416 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715430 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" 
volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715443 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715454 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715469 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715484 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715514 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715532 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715562 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715571 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715581 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715603 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715613 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" 
volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715621 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715632 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715650 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715658 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715669 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715677 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715704 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715712 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715723 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715732 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715748 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" 
volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715760 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715771 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715784 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715794 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715805 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715814 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715822 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715841 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715848 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715859 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715867 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" 
volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715904 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715915 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715922 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715933 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715941 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715970 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715979 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715988 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.715998 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716129 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716147 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" 
volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716159 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716168 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716176 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716186 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716194 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716208 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716217 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716227 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716304 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716315 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716326 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" 
volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716335 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716387 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716395 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716421 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716431 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716439 5116 reconstruct.go:97] "Volume reconstruction finished" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.716447 5116 reconciler.go:26] "Reconciler: start to sync state" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.734631 5116 manager.go:324] Recovery completed Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.745334 5116 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.746584 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.747333 5116 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.747368 5116 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.747392 5116 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.747402 5116 kubelet.go:2451] "Starting kubelet main sync loop" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.747463 5116 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.748210 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.748854 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.749021 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.749083 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.749577 5116 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.749590 5116 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.749618 5116 state_mem.go:36] "Initialized new in-memory state store" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.753263 5116 policy_none.go:49] "None policy: Start" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.753291 5116 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.753307 5116 state_mem.go:35] "Initializing new in-memory state store" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.790378 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.793353 5116 manager.go:341] "Starting Device Plugin manager" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.796153 5116 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.796231 5116 server.go:85] "Starting device plugin registration server" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.797078 5116 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.797110 5116 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.797525 5116 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.797674 5116 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.797692 5116 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.801599 5116 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.801654 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.848180 5116 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.848355 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.849444 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.849480 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.849492 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.850033 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.850185 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.850236 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.850374 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.850394 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.850404 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.851141 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.851235 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.851259 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.851407 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.851432 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.851442 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.851605 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.851634 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.851647 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.851725 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.851757 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.851776 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.852492 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.852843 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.852933 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.853134 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.853169 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.853181 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.854125 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.854293 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.854318 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.854330 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.854330 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.854580 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.855214 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.855238 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.855257 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.855260 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.855494 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.855519 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.855882 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.855908 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.856616 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.856648 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.856661 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.888027 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.895039 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="400ms" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.898469 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.899502 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.899546 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.899556 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.899599 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.900029 5116 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.914455 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918245 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918304 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918330 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918516 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918544 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918564 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918583 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918606 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918632 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918679 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918716 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918741 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918764 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918786 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918809 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.919012 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.918833 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.919347 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.919420 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.919456 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.919478 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:21 crc 
kubenswrapper[5116]: I1209 14:14:21.919505 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.919550 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.919575 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.919730 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.919853 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.920006 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.920032 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.920573 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: I1209 14:14:21.920796 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.923694 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"crc\" not found" node="crc" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.943303 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:21 crc kubenswrapper[5116]: E1209 14:14:21.951171 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.020681 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.020764 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.020802 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.020832 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.020855 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.020858 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.020878 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.020900 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.020938 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.020969 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.020998 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021026 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021048 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021080 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021091 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021100 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021120 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.020826 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021143 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021145 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021179 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021215 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021243 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021221 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021262 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021294 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021270 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021099 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021291 5116 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021275 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021325 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.021331 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.100911 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.101791 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.101850 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.101869 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.101906 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:14:22 crc kubenswrapper[5116]: E1209 14:14:22.102474 5116 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.188472 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: W1209 14:14:22.211205 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-5cdf060858c6dded7a8b7c652177ac64666cf2da1cd5da375a17a69d2adf2582 WatchSource:0}: Error finding container 5cdf060858c6dded7a8b7c652177ac64666cf2da1cd5da375a17a69d2adf2582: Status 404 returned error can't find the container with id 5cdf060858c6dded7a8b7c652177ac64666cf2da1cd5da375a17a69d2adf2582 Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.214806 5116 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.214811 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.226402 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: W1209 14:14:22.241665 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-f681687c40432979495b5a0d924a4fc2ac811a5ebe21cf5820bb06ea31ce9ac0 WatchSource:0}: Error finding container f681687c40432979495b5a0d924a4fc2ac811a5ebe21cf5820bb06ea31ce9ac0: Status 404 returned error can't find the container with id f681687c40432979495b5a0d924a4fc2ac811a5ebe21cf5820bb06ea31ce9ac0 Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.243468 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.251537 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 14:14:22 crc kubenswrapper[5116]: W1209 14:14:22.253693 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-03f87c64d28582abd3ffea63b2d477d65ccd33fdca9cc0560d2353ac673c79d0 WatchSource:0}: Error finding container 03f87c64d28582abd3ffea63b2d477d65ccd33fdca9cc0560d2353ac673c79d0: Status 404 returned error can't find the container with id 03f87c64d28582abd3ffea63b2d477d65ccd33fdca9cc0560d2353ac673c79d0 Dec 09 14:14:22 crc kubenswrapper[5116]: W1209 14:14:22.258628 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-7d1a9b9213856dc7af19f496d16ec2bc8cd4fa63494abd4713dac31d5a438023 WatchSource:0}: Error finding container 7d1a9b9213856dc7af19f496d16ec2bc8cd4fa63494abd4713dac31d5a438023: Status 404 returned error can't find the container with id 7d1a9b9213856dc7af19f496d16ec2bc8cd4fa63494abd4713dac31d5a438023 Dec 09 14:14:22 crc kubenswrapper[5116]: W1209 14:14:22.276407 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-956f5bd4547acdcd5ccc878ffc4fbc6816deffe5480c1ea60be801f9ff8e58e4 WatchSource:0}: Error finding container 956f5bd4547acdcd5ccc878ffc4fbc6816deffe5480c1ea60be801f9ff8e58e4: Status 404 returned error can't find the container with id 956f5bd4547acdcd5ccc878ffc4fbc6816deffe5480c1ea60be801f9ff8e58e4 Dec 09 14:14:22 crc kubenswrapper[5116]: E1209 14:14:22.296426 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="800ms" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.502582 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.504027 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.504067 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.504078 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.504100 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:14:22 crc kubenswrapper[5116]: E1209 14:14:22.504528 5116 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.678850 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.753777 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba"} Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.753855 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"03f87c64d28582abd3ffea63b2d477d65ccd33fdca9cc0560d2353ac673c79d0"} Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.756573 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7"} Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.756612 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"f681687c40432979495b5a0d924a4fc2ac811a5ebe21cf5820bb06ea31ce9ac0"} Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.758987 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851"} Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.759016 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"5cdf060858c6dded7a8b7c652177ac64666cf2da1cd5da375a17a69d2adf2582"} Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.759139 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.759776 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.759798 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.759807 5116 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:22 crc kubenswrapper[5116]: E1209 14:14:22.760010 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.761220 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873"} Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.761261 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"956f5bd4547acdcd5ccc878ffc4fbc6816deffe5480c1ea60be801f9ff8e58e4"} Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.761433 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.762132 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.762163 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.762175 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:22 crc kubenswrapper[5116]: E1209 14:14:22.762349 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.763759 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"7d1a9b9213856dc7af19f496d16ec2bc8cd4fa63494abd4713dac31d5a438023"} Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.763841 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.765010 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.765053 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:22 crc kubenswrapper[5116]: I1209 14:14:22.765067 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:22 crc kubenswrapper[5116]: E1209 14:14:22.765314 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:22 crc kubenswrapper[5116]: E1209 14:14:22.879448 5116 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f91982a86d105 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.687058693 +0000 UTC m=+0.208803491,LastTimestamp:2025-12-09 14:14:21.687058693 +0000 UTC m=+0.208803491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:22 crc kubenswrapper[5116]: E1209 14:14:22.898624 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:14:22 crc kubenswrapper[5116]: E1209 14:14:22.939733 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:14:23 crc kubenswrapper[5116]: E1209 14:14:23.004002 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:14:23 crc kubenswrapper[5116]: E1209 14:14:23.042396 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:14:23 crc kubenswrapper[5116]: E1209 14:14:23.097869 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="1.6s" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.304616 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.305671 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.305770 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.305808 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.305904 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:14:23 crc kubenswrapper[5116]: E1209 14:14:23.307218 5116 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.678637 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.688568 5116 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 09 14:14:23 crc kubenswrapper[5116]: E1209 14:14:23.689565 5116 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.767380 5116 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873" exitCode=0 Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.767465 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873"} Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.767672 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.768227 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.768266 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.768278 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:23 crc kubenswrapper[5116]: E1209 14:14:23.768476 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.769716 5116 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb" exitCode=0 Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.769773 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb"} Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.769858 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.771592 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.771629 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.771648 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:23 crc kubenswrapper[5116]: E1209 14:14:23.771858 5116 kubelet.go:3336] "No need to create 
a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.773660 5116 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba" exitCode=0 Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.773779 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba"} Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.773887 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.774508 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.774542 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.774555 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:23 crc kubenswrapper[5116]: E1209 14:14:23.774852 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.777313 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4"} Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.777347 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c"} Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.777362 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac"} Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.777390 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.778828 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.778859 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.778871 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:23 crc kubenswrapper[5116]: E1209 14:14:23.779080 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.779355 5116 generic.go:358] "Generic (PLEG): 
container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851" exitCode=0 Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.779392 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851"} Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.779560 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.780409 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.780447 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.780459 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:23 crc kubenswrapper[5116]: E1209 14:14:23.780646 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.785026 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.785827 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.785876 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:23 crc kubenswrapper[5116]: I1209 14:14:23.785892 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.786617 5116 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c" exitCode=0 Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.786721 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c"} Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.786987 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.787899 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.787938 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.787965 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:24 crc kubenswrapper[5116]: E1209 14:14:24.788166 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.788240 5116 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe"} Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.788396 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.789154 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.789188 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.789199 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:24 crc kubenswrapper[5116]: E1209 14:14:24.789401 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.792680 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0300cd9dfa7"} Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.792707 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a"} Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.792722 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76"} Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.792745 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.793153 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.793183 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.793195 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:24 crc kubenswrapper[5116]: E1209 14:14:24.793477 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.796744 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846"} Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.796782 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14"} Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.796796 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a"} Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.796809 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596"} Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.796846 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.797334 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.797369 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.797381 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:24 crc kubenswrapper[5116]: E1209 14:14:24.797693 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.907677 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.908518 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.908552 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.908565 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:24 crc kubenswrapper[5116]: I1209 14:14:24.908592 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.805218 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a1bae9d6fcefae5165b4caff1061a927b864dd63dcee6ed1bdd091f0e4ac2d99"} Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.805492 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.806744 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.806822 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.806844 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:25 crc kubenswrapper[5116]: E1209 14:14:25.807229 5116 
kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.810560 5116 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba" exitCode=0 Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.810697 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba"} Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.810860 5116 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.810947 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.811064 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.812021 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.812087 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.812110 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.812309 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.812367 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:25 crc kubenswrapper[5116]: I1209 14:14:25.812397 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:25 crc kubenswrapper[5116]: E1209 14:14:25.812515 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:25 crc kubenswrapper[5116]: E1209 14:14:25.813200 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:26 crc kubenswrapper[5116]: I1209 14:14:26.815714 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c"} Dec 09 14:14:26 crc kubenswrapper[5116]: I1209 14:14:26.815766 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960"} Dec 09 14:14:26 crc kubenswrapper[5116]: I1209 14:14:26.815779 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95"} Dec 09 14:14:26 crc kubenswrapper[5116]: I1209 14:14:26.815916 5116 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:26 crc kubenswrapper[5116]: I1209 14:14:26.815977 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:26 crc kubenswrapper[5116]: I1209 14:14:26.816901 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:26 crc kubenswrapper[5116]: I1209 14:14:26.817016 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:26 crc kubenswrapper[5116]: I1209 14:14:26.817042 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:26 crc kubenswrapper[5116]: E1209 14:14:26.817586 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:26 crc kubenswrapper[5116]: I1209 14:14:26.954118 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:26 crc kubenswrapper[5116]: I1209 14:14:26.954753 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:26 crc kubenswrapper[5116]: I1209 14:14:26.955587 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:26 crc kubenswrapper[5116]: I1209 14:14:26.955632 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:26 crc kubenswrapper[5116]: I1209 14:14:26.955647 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:26 crc kubenswrapper[5116]: E1209 14:14:26.955994 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:26 crc kubenswrapper[5116]: I1209 14:14:26.962287 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.719925 5116 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.821102 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.821405 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.821517 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe"} Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.821542 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66"} Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.821626 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.822014 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.822043 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.822060 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.822044 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.822217 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.822228 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:27 crc kubenswrapper[5116]: E1209 14:14:27.822363 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:27 crc kubenswrapper[5116]: E1209 14:14:27.822439 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.822015 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.822702 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.822712 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:27 crc kubenswrapper[5116]: E1209 14:14:27.822854 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:27 crc kubenswrapper[5116]: I1209 14:14:27.901121 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Dec 09 14:14:28 crc kubenswrapper[5116]: I1209 14:14:28.130635 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:28 crc kubenswrapper[5116]: I1209 14:14:28.824264 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:28 crc kubenswrapper[5116]: I1209 14:14:28.824364 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:28 crc kubenswrapper[5116]: I1209 14:14:28.825311 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:28 crc kubenswrapper[5116]: I1209 14:14:28.825367 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:28 crc kubenswrapper[5116]: I1209 14:14:28.825387 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:28 crc kubenswrapper[5116]: I1209 14:14:28.825399 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 14:14:28 crc kubenswrapper[5116]: I1209 14:14:28.825446 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:28 crc kubenswrapper[5116]: I1209 14:14:28.825466 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:28 crc kubenswrapper[5116]: E1209 14:14:28.826182 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:28 crc kubenswrapper[5116]: E1209 14:14:28.826812 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:29 crc kubenswrapper[5116]: I1209 14:14:29.806809 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:29 crc kubenswrapper[5116]: I1209 14:14:29.827898 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:29 crc kubenswrapper[5116]: I1209 14:14:29.827922 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:29 crc kubenswrapper[5116]: I1209 14:14:29.828875 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:29 crc kubenswrapper[5116]: I1209 14:14:29.828936 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:29 crc kubenswrapper[5116]: I1209 14:14:29.829004 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:29 crc kubenswrapper[5116]: I1209 14:14:29.828913 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:29 crc kubenswrapper[5116]: I1209 14:14:29.829079 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:29 crc kubenswrapper[5116]: I1209 14:14:29.829110 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:29 crc kubenswrapper[5116]: E1209 14:14:29.829766 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:29 crc kubenswrapper[5116]: E1209 14:14:29.830117 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:31 crc kubenswrapper[5116]: I1209 14:14:31.734770 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:31 crc kubenswrapper[5116]: I1209 14:14:31.735116 5116 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 14:14:31 crc kubenswrapper[5116]: I1209 14:14:31.735184 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:31 crc kubenswrapper[5116]: I1209 14:14:31.736604 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:31 crc kubenswrapper[5116]: I1209 14:14:31.736671 5116 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:31 crc kubenswrapper[5116]: I1209 14:14:31.736684 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:31 crc kubenswrapper[5116]: E1209 14:14:31.737201 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:31 crc kubenswrapper[5116]: E1209 14:14:31.801937 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.085637 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.085917 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.086905 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.087053 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.087078 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:32 crc kubenswrapper[5116]: E1209 14:14:32.087594 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.120279 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.120538 5116 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.120593 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.121678 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.121769 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.121783 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:32 crc kubenswrapper[5116]: E1209 14:14:32.122298 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.126735 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.835231 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.836216 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 
14:14:32.836259 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.836269 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:32 crc kubenswrapper[5116]: E1209 14:14:32.836619 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:32 crc kubenswrapper[5116]: I1209 14:14:32.943207 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:33 crc kubenswrapper[5116]: I1209 14:14:33.837426 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:33 crc kubenswrapper[5116]: I1209 14:14:33.837981 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:33 crc kubenswrapper[5116]: I1209 14:14:33.838018 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:33 crc kubenswrapper[5116]: I1209 14:14:33.838030 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:33 crc kubenswrapper[5116]: E1209 14:14:33.838344 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:34 crc kubenswrapper[5116]: I1209 14:14:34.680173 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 09 14:14:34 crc kubenswrapper[5116]: E1209 14:14:34.698844 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 09 14:14:34 crc kubenswrapper[5116]: I1209 14:14:34.867629 5116 trace.go:236] Trace[297472261]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:14:24.866) (total time: 10001ms): Dec 09 14:14:34 crc kubenswrapper[5116]: Trace[297472261]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:14:34.867) Dec 09 14:14:34 crc kubenswrapper[5116]: Trace[297472261]: [10.001179699s] [10.001179699s] END Dec 09 14:14:34 crc kubenswrapper[5116]: E1209 14:14:34.867677 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:14:34 crc kubenswrapper[5116]: E1209 14:14:34.909737 5116 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 09 14:14:34 crc kubenswrapper[5116]: I1209 14:14:34.956160 5116 trace.go:236] Trace[1576719138]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:14:24.954) (total time: 10001ms): Dec 09 14:14:34 crc kubenswrapper[5116]: Trace[1576719138]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:14:34.956) Dec 09 14:14:34 crc kubenswrapper[5116]: Trace[1576719138]: [10.00157932s] [10.00157932s] END Dec 09 14:14:34 crc kubenswrapper[5116]: E1209 14:14:34.956203 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:14:35 crc kubenswrapper[5116]: I1209 14:14:35.870213 5116 trace.go:236] Trace[1842176494]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:14:25.868) (total time: 10001ms): Dec 09 14:14:35 crc kubenswrapper[5116]: Trace[1842176494]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:14:35.870) Dec 09 14:14:35 crc kubenswrapper[5116]: Trace[1842176494]: [10.001411145s] [10.001411145s] END Dec 09 14:14:35 crc kubenswrapper[5116]: E1209 14:14:35.870263 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:14:35 crc kubenswrapper[5116]: I1209 14:14:35.944095 5116 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Dec 09 14:14:35 crc kubenswrapper[5116]: I1209 14:14:35.944234 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Dec 09 14:14:36 crc kubenswrapper[5116]: I1209 14:14:36.009918 5116 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 14:14:36 crc kubenswrapper[5116]: I1209 14:14:36.010036 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 14:14:36 crc kubenswrapper[5116]: I1209 14:14:36.017456 5116 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Dec 09 14:14:36 crc kubenswrapper[5116]: I1209 14:14:36.017553 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 14:14:37 crc kubenswrapper[5116]: I1209 14:14:37.202550 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 09 14:14:37 crc kubenswrapper[5116]: I1209 14:14:37.202873 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:37 crc kubenswrapper[5116]: I1209 14:14:37.203617 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:37 crc kubenswrapper[5116]: I1209 14:14:37.203655 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:37 crc kubenswrapper[5116]: I1209 14:14:37.203667 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:37 crc kubenswrapper[5116]: E1209 14:14:37.204010 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:37 crc kubenswrapper[5116]: I1209 14:14:37.225565 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 09 14:14:37 crc kubenswrapper[5116]: I1209 14:14:37.857276 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:37 crc kubenswrapper[5116]: I1209 14:14:37.858438 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:37 crc kubenswrapper[5116]: I1209 14:14:37.858520 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:37 crc kubenswrapper[5116]: I1209 14:14:37.858542 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:37 crc kubenswrapper[5116]: E1209 14:14:37.859336 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:37 crc kubenswrapper[5116]: I1209 14:14:37.879514 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 09 14:14:37 crc kubenswrapper[5116]: E1209 14:14:37.903766 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.109922 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.111520 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.111659 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.111696 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.111948 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:14:38 crc kubenswrapper[5116]: E1209 14:14:38.128915 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.140657 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.141349 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.142644 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.142923 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.143228 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:38 crc kubenswrapper[5116]: E1209 14:14:38.144181 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.148999 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.860064 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.860123 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.861175 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.861236 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.861261 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.861323 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.861417 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:38 crc kubenswrapper[5116]: I1209 14:14:38.861450 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:38 crc 
kubenswrapper[5116]: E1209 14:14:38.862088 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:38 crc kubenswrapper[5116]: E1209 14:14:38.862501 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:39 crc kubenswrapper[5116]: E1209 14:14:39.215869 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:14:39 crc kubenswrapper[5116]: E1209 14:14:39.624104 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:14:40 crc kubenswrapper[5116]: E1209 14:14:40.254090 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.007258 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.007347 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982a86d105 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.687058693 +0000 UTC m=+0.208803491,LastTimestamp:2025-12-09 14:14:21.687058693 +0000 UTC m=+0.208803491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.007984 5116 trace.go:236] Trace[734155615]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:14:26.138) (total time: 14869ms): Dec 09 14:14:41 crc kubenswrapper[5116]: Trace[734155615]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 14869ms (14:14:41.007) Dec 09 14:14:41 crc kubenswrapper[5116]: Trace[734155615]: [14.869080186s] [14.869080186s] END Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.008026 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.012231 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e36c88d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.748922509 +0000 UTC m=+0.270667307,LastTimestamp:2025-12-09 14:14:21.748922509 +0000 UTC m=+0.270667307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.017741 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e3912ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.749072622 +0000 UTC m=+0.270817420,LastTimestamp:2025-12-09 14:14:21.749072622 +0000 UTC m=+0.270817420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.020310 5116 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.024719 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e39f36d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.749130093 +0000 UTC m=+0.270874891,LastTimestamp:2025-12-09 14:14:21.749130093 +0000 UTC m=+0.270874891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.033137 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f919831fa59eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit 
across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.812070891 +0000 UTC m=+0.333815689,LastTimestamp:2025-12-09 14:14:21.812070891 +0000 UTC m=+0.333815689,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.041537 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e36c88d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e36c88d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.748922509 +0000 UTC m=+0.270667307,LastTimestamp:2025-12-09 14:14:21.849466091 +0000 UTC m=+0.371210889,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.046757 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e3912ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e3912ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.749072622 +0000 UTC m=+0.270817420,LastTimestamp:2025-12-09 14:14:21.849486541 +0000 UTC m=+0.371231349,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.052045 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e39f36d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e39f36d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.749130093 +0000 UTC m=+0.270874891,LastTimestamp:2025-12-09 14:14:21.849498101 +0000 UTC m=+0.371242899,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.057694 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e36c88d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e36c88d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.748922509 +0000 UTC m=+0.270667307,LastTimestamp:2025-12-09 14:14:21.850387379 +0000 UTC m=+0.372132177,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.061995 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e3912ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e3912ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.749072622 +0000 UTC m=+0.270817420,LastTimestamp:2025-12-09 14:14:21.85039983 +0000 UTC m=+0.372144628,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.069203 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e39f36d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e39f36d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.749130093 +0000 UTC m=+0.270874891,LastTimestamp:2025-12-09 14:14:21.85040956 +0000 UTC m=+0.372154358,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.075375 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e36c88d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e36c88d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.748922509 +0000 UTC m=+0.270667307,LastTimestamp:2025-12-09 14:14:21.85142027 +0000 UTC m=+0.373165068,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.081012 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e3912ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.187f91982e3912ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.749072622 +0000 UTC m=+0.270817420,LastTimestamp:2025-12-09 14:14:21.851438131 +0000 UTC m=+0.373182929,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.091073 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e39f36d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e39f36d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.749130093 +0000 UTC m=+0.270874891,LastTimestamp:2025-12-09 14:14:21.851447471 +0000 UTC m=+0.373192269,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.096338 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e36c88d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e36c88d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.748922509 +0000 UTC m=+0.270667307,LastTimestamp:2025-12-09 14:14:21.851626295 +0000 UTC m=+0.373371093,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.103711 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e3912ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e3912ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.749072622 +0000 UTC m=+0.270817420,LastTimestamp:2025-12-09 14:14:21.851641075 +0000 UTC m=+0.373385873,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.109261 5116 event.go:359] "Server rejected event (will not retry!)" err="events 
\"crc.187f91982e39f36d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e39f36d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.749130093 +0000 UTC m=+0.270874891,LastTimestamp:2025-12-09 14:14:21.851652875 +0000 UTC m=+0.373397673,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.113514 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e36c88d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e36c88d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.748922509 +0000 UTC m=+0.270667307,LastTimestamp:2025-12-09 14:14:21.851743647 +0000 UTC m=+0.373488445,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.116821 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e3912ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e3912ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.749072622 +0000 UTC m=+0.270817420,LastTimestamp:2025-12-09 14:14:21.851766977 +0000 UTC m=+0.373511775,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.121058 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e39f36d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e39f36d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.749130093 +0000 UTC m=+0.270874891,LastTimestamp:2025-12-09 14:14:21.851781318 +0000 UTC m=+0.373526116,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc 
kubenswrapper[5116]: E1209 14:14:41.147626 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e36c88d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e36c88d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.748922509 +0000 UTC m=+0.270667307,LastTimestamp:2025-12-09 14:14:21.853151595 +0000 UTC m=+0.374896393,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.151899 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e3912ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e3912ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.749072622 +0000 UTC m=+0.270817420,LastTimestamp:2025-12-09 14:14:21.853176766 +0000 UTC m=+0.374921564,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.170071 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e39f36d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e39f36d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.749130093 +0000 UTC m=+0.270874891,LastTimestamp:2025-12-09 14:14:21.853187606 +0000 UTC m=+0.374932404,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.176665 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e36c88d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e36c88d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.748922509 +0000 UTC m=+0.270667307,LastTimestamp:2025-12-09 14:14:21.854308569 +0000 UTC m=+0.376053367,Count:8,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.181822 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f91982e3912ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f91982e3912ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:21.749072622 +0000 UTC m=+0.270817420,LastTimestamp:2025-12-09 14:14:21.854324729 +0000 UTC m=+0.376069537,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.186980 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f91984a019f83 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.215200643 +0000 UTC m=+0.736945451,LastTimestamp:2025-12-09 14:14:22.215200643 +0000 UTC m=+0.736945451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.190411 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f91984bbbf40e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.244189198 +0000 UTC m=+0.765934016,LastTimestamp:2025-12-09 14:14:22.244189198 +0000 UTC m=+0.765934016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.194558 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f91984c867de3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.257462755 +0000 UTC m=+0.779207583,LastTimestamp:2025-12-09 14:14:22.257462755 +0000 UTC m=+0.779207583,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.198903 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f91984d965562 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.275278178 +0000 UTC m=+0.797022976,LastTimestamp:2025-12-09 14:14:22.275278178 +0000 UTC m=+0.797022976,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.204788 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91984dc6e930 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.278461744 +0000 UTC m=+0.800206552,LastTimestamp:2025-12-09 14:14:22.278461744 +0000 UTC m=+0.800206552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.210523 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.187f919868e942f5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.733697781 +0000 UTC m=+1.255442589,LastTimestamp:2025-12-09 14:14:22.733697781 +0000 UTC m=+1.255442589,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.218211 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91986915abea openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.736608234 +0000 UTC m=+1.258353042,LastTimestamp:2025-12-09 14:14:22.736608234 +0000 UTC m=+1.258353042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.223888 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f919869171556 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.736700758 +0000 UTC m=+1.258445566,LastTimestamp:2025-12-09 14:14:22.736700758 +0000 UTC m=+1.258445566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.233098 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f91986918486d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.736779373 
+0000 UTC m=+1.258524181,LastTimestamp:2025-12-09 14:14:22.736779373 +0000 UTC m=+1.258524181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.237472 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f919869298cc4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.73791098 +0000 UTC m=+1.259655788,LastTimestamp:2025-12-09 14:14:22.73791098 +0000 UTC m=+1.259655788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.241256 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f919869e59b5f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.750235487 +0000 UTC m=+1.271980285,LastTimestamp:2025-12-09 14:14:22.750235487 +0000 UTC m=+1.271980285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.245053 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f919869f9beb4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.751555252 +0000 UTC m=+1.273300050,LastTimestamp:2025-12-09 14:14:22.751555252 +0000 UTC m=+1.273300050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" 
Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.248719 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f91986a04608f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.752252047 +0000 UTC m=+1.273996865,LastTimestamp:2025-12-09 14:14:22.752252047 +0000 UTC m=+1.273996865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.252421 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f91986a2d56f5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.754936565 +0000 UTC m=+1.276681373,LastTimestamp:2025-12-09 14:14:22.754936565 +0000 UTC m=+1.276681373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.255800 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91986a382773 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.755645299 +0000 UTC m=+1.277390117,LastTimestamp:2025-12-09 14:14:22.755645299 +0000 UTC m=+1.277390117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.261970 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f91986a63b457 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:22.758499415 +0000 UTC m=+1.280244233,LastTimestamp:2025-12-09 14:14:22.758499415 +0000 UTC m=+1.280244233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.267754 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f91987bbe2122 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:23.049638178 +0000 UTC m=+1.571383016,LastTimestamp:2025-12-09 14:14:23.049638178 +0000 UTC m=+1.571383016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.271898 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f91987c759abb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:23.061662395 +0000 UTC m=+1.583407243,LastTimestamp:2025-12-09 14:14:23.061662395 +0000 UTC m=+1.583407243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.275805 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f91987c8e6032 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:23.06328581 +0000 UTC m=+1.585030648,LastTimestamp:2025-12-09 14:14:23.06328581 +0000 UTC m=+1.585030648,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.279938 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f9198936624d2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:23.446525138 +0000 UTC m=+1.968269976,LastTimestamp:2025-12-09 14:14:23.446525138 +0000 UTC m=+1.968269976,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.285177 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f919893eda569 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:23.455405417 +0000 UTC m=+1.977150255,LastTimestamp:2025-12-09 14:14:23.455405417 +0000 UTC m=+1.977150255,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.291580 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f919894033b44 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:23.456820036 +0000 UTC m=+1.978564874,LastTimestamp:2025-12-09 14:14:23.456820036 +0000 UTC m=+1.978564874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.295396 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f9198a45debc5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:23.731198917 +0000 UTC m=+2.252943715,LastTimestamp:2025-12-09 14:14:23.731198917 +0000 UTC m=+2.252943715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.299013 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f9198a4f78fbb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:23.741267899 +0000 UTC m=+2.263012697,LastTimestamp:2025-12-09 14:14:23.741267899 +0000 UTC m=+2.263012697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.302920 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9198a6b29d25 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:23.770303781 +0000 UTC m=+2.292048589,LastTimestamp:2025-12-09 14:14:23.770303781 +0000 UTC m=+2.292048589,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.306484 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f9198a6db6c8f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:23.772978319 +0000 UTC m=+2.294723117,LastTimestamp:2025-12-09 14:14:23.772978319 +0000 UTC m=+2.294723117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.310752 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9198a70ba276 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:23.776137846 +0000 UTC m=+2.297882644,LastTimestamp:2025-12-09 14:14:23.776137846 +0000 UTC m=+2.297882644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.315247 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198a79067f5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:23.784839157 +0000 UTC m=+2.306583955,LastTimestamp:2025-12-09 14:14:23.784839157 +0000 UTC m=+2.306583955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.319428 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9198b4ad5775 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.004839285 +0000 UTC m=+2.526584083,LastTimestamp:2025-12-09 14:14:24.004839285 +0000 UTC m=+2.526584083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.325603 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f9198b4c5c0e6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.006439142 +0000 UTC m=+2.528183940,LastTimestamp:2025-12-09 14:14:24.006439142 +0000 UTC m=+2.528183940,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.331974 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9198b4cac897 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.006768791 +0000 UTC m=+2.528513579,LastTimestamp:2025-12-09 14:14:24.006768791 +0000 UTC m=+2.528513579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.338413 5116 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198b515a7f3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.011675635 +0000 UTC m=+2.533420443,LastTimestamp:2025-12-09 14:14:24.011675635 +0000 UTC m=+2.533420443,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.342943 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9198b52b19b7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.013081015 +0000 UTC m=+2.534825813,LastTimestamp:2025-12-09 14:14:24.013081015 +0000 UTC m=+2.534825813,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.348997 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9198b5402524 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.014460196 +0000 UTC m=+2.536204994,LastTimestamp:2025-12-09 14:14:24.014460196 +0000 UTC m=+2.536204994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.353911 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f9198b5646f21 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.016838433 +0000 UTC m=+2.538583231,LastTimestamp:2025-12-09 14:14:24.016838433 +0000 UTC m=+2.538583231,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.359530 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198b6523c95 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.032423061 +0000 UTC m=+2.554167859,LastTimestamp:2025-12-09 14:14:24.032423061 +0000 UTC m=+2.554167859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.364115 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9198b65590df openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.032641247 +0000 UTC m=+2.554386045,LastTimestamp:2025-12-09 14:14:24.032641247 +0000 UTC m=+2.554386045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.368740 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198b661e5b3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 
14:14:24.033449395 +0000 UTC m=+2.555194193,LastTimestamp:2025-12-09 14:14:24.033449395 +0000 UTC m=+2.555194193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.372616 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9198c25aad36 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.234302774 +0000 UTC m=+2.756047582,LastTimestamp:2025-12-09 14:14:24.234302774 +0000 UTC m=+2.756047582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.375914 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198c2731b6a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.23590385 +0000 UTC m=+2.757648638,LastTimestamp:2025-12-09 14:14:24.23590385 +0000 UTC m=+2.757648638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.382017 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198c3099e39 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.245767737 +0000 UTC m=+2.767512545,LastTimestamp:2025-12-09 14:14:24.245767737 +0000 UTC m=+2.767512545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.385881 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198c319b2a6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.246821542 +0000 UTC m=+2.768566350,LastTimestamp:2025-12-09 14:14:24.246821542 +0000 UTC m=+2.768566350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.389942 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9198c3520e49 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.250515017 +0000 UTC m=+2.772259815,LastTimestamp:2025-12-09 14:14:24.250515017 +0000 UTC m=+2.772259815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.395489 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9198c370d2db openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.252531419 +0000 UTC m=+2.774276207,LastTimestamp:2025-12-09 14:14:24.252531419 +0000 UTC m=+2.774276207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.399548 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198cf3a574c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.450287436 +0000 UTC m=+2.972032234,LastTimestamp:2025-12-09 14:14:24.450287436 +0000 UTC m=+2.972032234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.403640 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9198d043e33b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.467690299 +0000 UTC m=+2.989435097,LastTimestamp:2025-12-09 14:14:24.467690299 +0000 UTC m=+2.989435097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.408771 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198d0517fb9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.468582329 +0000 UTC m=+2.990327127,LastTimestamp:2025-12-09 14:14:24.468582329 +0000 UTC m=+2.990327127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.414457 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198d06f8b4d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.470551373 +0000 UTC m=+2.992296171,LastTimestamp:2025-12-09 14:14:24.470551373 +0000 UTC m=+2.992296171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.420139 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9198d1deb727 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.494614311 +0000 UTC m=+3.016359109,LastTimestamp:2025-12-09 14:14:24.494614311 +0000 UTC m=+3.016359109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.424773 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198dcf8feae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.680885934 +0000 UTC m=+3.202630742,LastTimestamp:2025-12-09 14:14:24.680885934 +0000 UTC m=+3.202630742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.429133 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198ddf31053 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.697274451 +0000 UTC m=+3.219019249,LastTimestamp:2025-12-09 14:14:24.697274451 +0000 UTC m=+3.219019249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.434056 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198de002f6e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.698134382 +0000 UTC m=+3.219879170,LastTimestamp:2025-12-09 14:14:24.698134382 +0000 UTC m=+3.219879170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.438714 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9198e3a3769b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.792721051 +0000 UTC m=+3.314465889,LastTimestamp:2025-12-09 14:14:24.792721051 +0000 UTC m=+3.314465889,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.444555 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198f2a877ce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:25.044707278 +0000 UTC m=+3.566452076,LastTimestamp:2025-12-09 14:14:25.044707278 +0000 UTC m=+3.566452076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.451666 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9198f2bddd01 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:25.046109441 +0000 UTC m=+3.567854239,LastTimestamp:2025-12-09 14:14:25.046109441 +0000 UTC m=+3.567854239,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.456456 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198f3a38105 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:25.061159173 +0000 UTC m=+3.582903971,LastTimestamp:2025-12-09 14:14:25.061159173 +0000 UTC m=+3.582903971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.460392 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9198f3bf91c7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:25.062998471 +0000 UTC m=+3.584743269,LastTimestamp:2025-12-09 14:14:25.062998471 +0000 UTC m=+3.584743269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.467387 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9199208807c3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:25.814333379 +0000 UTC m=+4.336078207,LastTimestamp:2025-12-09 14:14:25.814333379 +0000 UTC m=+4.336078207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.475020 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91992f8b2c10 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:26.06619752 +0000 UTC m=+4.587942348,LastTimestamp:2025-12-09 14:14:26.06619752 +0000 UTC m=+4.587942348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.480113 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91993055f780 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:26.079487872 +0000 UTC m=+4.601232710,LastTimestamp:2025-12-09 14:14:26.079487872 +0000 UTC m=+4.601232710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.486162 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9199306b80c0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:26.080899264 +0000 UTC m=+4.602644092,LastTimestamp:2025-12-09 14:14:26.080899264 +0000 UTC m=+4.602644092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.493373 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91993eddbfd0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:26.323267536 +0000 UTC m=+4.845012354,LastTimestamp:2025-12-09 14:14:26.323267536 +0000 UTC m=+4.845012354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.498221 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91993f9e74a7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:26.335896743 +0000 UTC m=+4.857641551,LastTimestamp:2025-12-09 14:14:26.335896743 +0000 UTC m=+4.857641551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.503379 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91993fb0d4cd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:26.337101005 +0000 UTC m=+4.858845833,LastTimestamp:2025-12-09 14:14:26.337101005 +0000 UTC 
m=+4.858845833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.507500 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91994de58acc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:26.575436492 +0000 UTC m=+5.097181290,LastTimestamp:2025-12-09 14:14:26.575436492 +0000 UTC m=+5.097181290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.512698 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91994ed6b585 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:26.591241605 +0000 UTC m=+5.112986403,LastTimestamp:2025-12-09 14:14:26.591241605 +0000 UTC m=+5.112986403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.517178 5116 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57136->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.517237 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57136->192.168.126.11:17697: read: connection reset by peer" Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.517368 5116 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57148->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.517448 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" 
output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57148->192.168.126.11:17697: read: connection reset by peer" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.517821 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91994eeb0bc4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:26.592574404 +0000 UTC m=+5.114319202,LastTimestamp:2025-12-09 14:14:26.592574404 +0000 UTC m=+5.114319202,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.518078 5116 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.518210 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.523141 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91995cd9f2b1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:26.826334897 +0000 UTC m=+5.348079695,LastTimestamp:2025-12-09 14:14:26.826334897 +0000 UTC m=+5.348079695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.528437 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91995d5e8994 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started 
container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:26.835024276 +0000 UTC m=+5.356769094,LastTimestamp:2025-12-09 14:14:26.835024276 +0000 UTC m=+5.356769094,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.535715 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91995d73ad04 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:26.836409604 +0000 UTC m=+5.358154402,LastTimestamp:2025-12-09 14:14:26.836409604 +0000 UTC m=+5.358154402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.542378 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91996a135976 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:27.048200566 +0000 UTC m=+5.569945364,LastTimestamp:2025-12-09 14:14:27.048200566 +0000 UTC m=+5.569945364,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.545909 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91996b2d46e8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:27.066676968 +0000 UTC m=+5.588421776,LastTimestamp:2025-12-09 14:14:27.066676968 +0000 UTC m=+5.588421776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.551421 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 09 14:14:41 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-controller-manager-crc.187f919b7c51abf9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Dec 09 14:14:41 crc kubenswrapper[5116]: body: Dec 09 14:14:41 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:35.944209401 +0000 UTC m=+14.465954219,LastTimestamp:2025-12-09 14:14:35.944209401 +0000 UTC m=+14.465954219,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:14:41 crc kubenswrapper[5116]: > Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.555144 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f919b7c52e143 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:35.944288579 +0000 UTC m=+14.466033397,LastTimestamp:2025-12-09 14:14:35.944288579 +0000 UTC m=+14.466033397,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.559994 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 09 14:14:41 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-apiserver-crc.187f919b803dad30 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 09 14:14:41 crc kubenswrapper[5116]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 14:14:41 crc kubenswrapper[5116]: Dec 09 14:14:41 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:36.010007856 +0000 UTC m=+14.531752694,LastTimestamp:2025-12-09 14:14:36.010007856 +0000 UTC m=+14.531752694,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:14:41 crc kubenswrapper[5116]: > Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.567339 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f919b803ed619 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:36.010083865 +0000 UTC m=+14.531828693,LastTimestamp:2025-12-09 14:14:36.010083865 +0000 UTC m=+14.531828693,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.576008 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 09 14:14:41 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-apiserver-crc.187f919b80b051f6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 09 14:14:41 crc kubenswrapper[5116]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Dec 09 14:14:41 crc kubenswrapper[5116]: Dec 09 14:14:41 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:36.017521142 +0000 UTC m=+14.539265960,LastTimestamp:2025-12-09 14:14:36.017521142 +0000 UTC m=+14.539265960,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:14:41 crc kubenswrapper[5116]: > Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.584047 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f919b803ed619\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f919b803ed619 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 
403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:36.010083865 +0000 UTC m=+14.531828693,LastTimestamp:2025-12-09 14:14:36.017582461 +0000 UTC m=+14.539327259,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.588686 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 09 14:14:41 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-apiserver-crc.187f919cc87f019b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:57136->192.168.126.11:17697: read: connection reset by peer Dec 09 14:14:41 crc kubenswrapper[5116]: body: Dec 09 14:14:41 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:41.517216155 +0000 UTC m=+20.038960953,LastTimestamp:2025-12-09 14:14:41.517216155 +0000 UTC m=+20.038960953,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:14:41 crc kubenswrapper[5116]: > Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.592429 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f919cc87f990e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57136->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:41.517254926 +0000 UTC m=+20.038999724,LastTimestamp:2025-12-09 14:14:41.517254926 +0000 UTC m=+20.038999724,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.599298 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 09 14:14:41 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-apiserver-crc.187f919cc88244fb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: 
Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:57148->192.168.126.11:17697: read: connection reset by peer Dec 09 14:14:41 crc kubenswrapper[5116]: body: Dec 09 14:14:41 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:41.517430011 +0000 UTC m=+20.039174809,LastTimestamp:2025-12-09 14:14:41.517430011 +0000 UTC m=+20.039174809,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:14:41 crc kubenswrapper[5116]: > Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.603602 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f919cc8837e59 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57148->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:41.517510233 +0000 UTC m=+20.039255031,LastTimestamp:2025-12-09 14:14:41.517510233 +0000 UTC m=+20.039255031,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.607482 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 09 14:14:41 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-apiserver-crc.187f919cc88e08f9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 09 14:14:41 crc kubenswrapper[5116]: body: Dec 09 14:14:41 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:41.518201081 +0000 UTC m=+20.039945879,LastTimestamp:2025-12-09 14:14:41.518201081 +0000 UTC m=+20.039945879,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:14:41 crc kubenswrapper[5116]: > Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.613233 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f919cc88f3681 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:41.518278273 +0000 UTC m=+20.040023071,LastTimestamp:2025-12-09 14:14:41.518278273 +0000 UTC m=+20.040023071,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.683380 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.802283 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.870227 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.872525 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a1bae9d6fcefae5165b4caff1061a927b864dd63dcee6ed1bdd091f0e4ac2d99" exitCode=255 Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.872616 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"a1bae9d6fcefae5165b4caff1061a927b864dd63dcee6ed1bdd091f0e4ac2d99"} Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.872902 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.873503 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.873543 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.873555 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.873978 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:41 crc kubenswrapper[5116]: I1209 14:14:41.874245 5116 scope.go:117] "RemoveContainer" containerID="a1bae9d6fcefae5165b4caff1061a927b864dd63dcee6ed1bdd091f0e4ac2d99" Dec 09 14:14:41 crc kubenswrapper[5116]: E1209 14:14:41.881867 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f9198de002f6e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198de002f6e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.698134382 +0000 UTC m=+3.219879170,LastTimestamp:2025-12-09 14:14:41.875464048 +0000 UTC m=+20.397208856,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:42 crc kubenswrapper[5116]: E1209 14:14:42.100098 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f9198f2a877ce\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198f2a877ce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:25.044707278 +0000 UTC m=+3.566452076,LastTimestamp:2025-12-09 14:14:42.093253978 +0000 UTC m=+20.614998776,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:42 crc kubenswrapper[5116]: E1209 14:14:42.118384 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f9198f3a38105\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198f3a38105 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:25.061159173 +0000 UTC m=+3.582903971,LastTimestamp:2025-12-09 14:14:42.109685325 +0000 UTC m=+20.631430123,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:42 crc kubenswrapper[5116]: I1209 14:14:42.683923 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:42 crc kubenswrapper[5116]: I1209 14:14:42.877295 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 09 14:14:42 crc kubenswrapper[5116]: I1209 14:14:42.878715 5116 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"47f3c7915db13931f1da866e91f9707eee01aa23827ae2fca5543101bd16c854"} Dec 09 14:14:42 crc kubenswrapper[5116]: I1209 14:14:42.878870 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:42 crc kubenswrapper[5116]: I1209 14:14:42.879435 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:42 crc kubenswrapper[5116]: I1209 14:14:42.879463 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:42 crc kubenswrapper[5116]: I1209 14:14:42.879471 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:42 crc kubenswrapper[5116]: E1209 14:14:42.879711 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:42 crc kubenswrapper[5116]: I1209 14:14:42.947469 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:42 crc kubenswrapper[5116]: I1209 14:14:42.947656 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:42 crc kubenswrapper[5116]: I1209 14:14:42.948713 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:42 crc kubenswrapper[5116]: I1209 14:14:42.948745 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:42 crc kubenswrapper[5116]: I1209 14:14:42.948757 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:42 crc kubenswrapper[5116]: E1209 14:14:42.949084 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:42 crc kubenswrapper[5116]: I1209 14:14:42.954418 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:14:43 crc kubenswrapper[5116]: I1209 14:14:43.683230 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:43 crc kubenswrapper[5116]: I1209 14:14:43.883242 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 09 14:14:43 crc kubenswrapper[5116]: I1209 14:14:43.884490 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 09 14:14:43 crc kubenswrapper[5116]: I1209 14:14:43.886636 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="47f3c7915db13931f1da866e91f9707eee01aa23827ae2fca5543101bd16c854" exitCode=255 Dec 09 14:14:43 crc kubenswrapper[5116]: I1209 14:14:43.886771 5116 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"47f3c7915db13931f1da866e91f9707eee01aa23827ae2fca5543101bd16c854"} Dec 09 14:14:43 crc kubenswrapper[5116]: I1209 14:14:43.886866 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:43 crc kubenswrapper[5116]: I1209 14:14:43.887008 5116 scope.go:117] "RemoveContainer" containerID="a1bae9d6fcefae5165b4caff1061a927b864dd63dcee6ed1bdd091f0e4ac2d99" Dec 09 14:14:43 crc kubenswrapper[5116]: I1209 14:14:43.887159 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:43 crc kubenswrapper[5116]: I1209 14:14:43.887521 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:43 crc kubenswrapper[5116]: I1209 14:14:43.887564 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:43 crc kubenswrapper[5116]: I1209 14:14:43.887579 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:43 crc kubenswrapper[5116]: I1209 14:14:43.887928 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:43 crc kubenswrapper[5116]: I1209 14:14:43.888082 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:43 crc kubenswrapper[5116]: I1209 14:14:43.888190 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:43 crc kubenswrapper[5116]: E1209 14:14:43.888008 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:43 crc kubenswrapper[5116]: E1209 14:14:43.889251 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:43 crc kubenswrapper[5116]: I1209 14:14:43.889725 5116 scope.go:117] "RemoveContainer" containerID="47f3c7915db13931f1da866e91f9707eee01aa23827ae2fca5543101bd16c854" Dec 09 14:14:43 crc kubenswrapper[5116]: E1209 14:14:43.890100 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:14:43 crc kubenswrapper[5116]: E1209 14:14:43.894528 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f919d55eda871 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container 
kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:43.890055281 +0000 UTC m=+22.411800079,LastTimestamp:2025-12-09 14:14:43.890055281 +0000 UTC m=+22.411800079,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:44 crc kubenswrapper[5116]: E1209 14:14:44.313848 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:14:44 crc kubenswrapper[5116]: I1209 14:14:44.530508 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:44 crc kubenswrapper[5116]: I1209 14:14:44.531908 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:44 crc kubenswrapper[5116]: I1209 14:14:44.532075 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:44 crc kubenswrapper[5116]: I1209 14:14:44.532105 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:44 crc kubenswrapper[5116]: I1209 14:14:44.532150 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:14:44 crc kubenswrapper[5116]: E1209 14:14:44.544437 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:14:44 crc kubenswrapper[5116]: I1209 14:14:44.684633 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:44 crc kubenswrapper[5116]: I1209 14:14:44.890334 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 09 14:14:45 crc kubenswrapper[5116]: I1209 14:14:45.684355 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:45 crc kubenswrapper[5116]: I1209 14:14:45.973410 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:45 crc kubenswrapper[5116]: I1209 14:14:45.973786 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:45 crc kubenswrapper[5116]: I1209 14:14:45.975154 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:45 crc kubenswrapper[5116]: I1209 14:14:45.975209 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:45 crc kubenswrapper[5116]: I1209 14:14:45.975230 5116 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:45 crc kubenswrapper[5116]: E1209 14:14:45.975725 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:45 crc kubenswrapper[5116]: I1209 14:14:45.976176 5116 scope.go:117] "RemoveContainer" containerID="47f3c7915db13931f1da866e91f9707eee01aa23827ae2fca5543101bd16c854" Dec 09 14:14:45 crc kubenswrapper[5116]: E1209 14:14:45.976486 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:14:45 crc kubenswrapper[5116]: E1209 14:14:45.985557 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f919d55eda871\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f919d55eda871 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:43.890055281 +0000 UTC m=+22.411800079,LastTimestamp:2025-12-09 14:14:45.976434703 +0000 UTC m=+24.498179541,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:46 crc kubenswrapper[5116]: E1209 14:14:46.140111 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:14:46 crc kubenswrapper[5116]: I1209 14:14:46.686264 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:47 crc kubenswrapper[5116]: I1209 14:14:47.685765 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:48 crc kubenswrapper[5116]: E1209 14:14:48.280049 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:14:48 crc kubenswrapper[5116]: I1209 14:14:48.687170 5116 csi_plugin.go:988] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:49 crc kubenswrapper[5116]: I1209 14:14:49.684709 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:50 crc kubenswrapper[5116]: I1209 14:14:50.683038 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:50 crc kubenswrapper[5116]: E1209 14:14:50.898423 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:14:51 crc kubenswrapper[5116]: E1209 14:14:51.322896 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:14:51 crc kubenswrapper[5116]: I1209 14:14:51.545518 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:51 crc kubenswrapper[5116]: I1209 14:14:51.547246 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:51 crc kubenswrapper[5116]: I1209 14:14:51.547318 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:51 crc kubenswrapper[5116]: I1209 14:14:51.547339 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:51 crc kubenswrapper[5116]: I1209 14:14:51.547381 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:14:51 crc kubenswrapper[5116]: E1209 14:14:51.561348 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:14:51 crc kubenswrapper[5116]: I1209 14:14:51.686023 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:51 crc kubenswrapper[5116]: E1209 14:14:51.803330 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:14:52 crc kubenswrapper[5116]: I1209 14:14:52.684428 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:52 crc kubenswrapper[5116]: I1209 14:14:52.879510 5116 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:52 crc kubenswrapper[5116]: I1209 14:14:52.879835 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:52 crc kubenswrapper[5116]: I1209 14:14:52.880914 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:52 crc kubenswrapper[5116]: I1209 14:14:52.881022 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:52 crc kubenswrapper[5116]: I1209 14:14:52.881093 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:52 crc kubenswrapper[5116]: E1209 14:14:52.881433 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:14:52 crc kubenswrapper[5116]: I1209 14:14:52.881702 5116 scope.go:117] "RemoveContainer" containerID="47f3c7915db13931f1da866e91f9707eee01aa23827ae2fca5543101bd16c854" Dec 09 14:14:52 crc kubenswrapper[5116]: E1209 14:14:52.881948 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:14:52 crc kubenswrapper[5116]: E1209 14:14:52.890657 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f919d55eda871\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f919d55eda871 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:43.890055281 +0000 UTC m=+22.411800079,LastTimestamp:2025-12-09 14:14:52.881920059 +0000 UTC m=+31.403664857,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:14:52 crc kubenswrapper[5116]: E1209 14:14:52.995289 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:14:53 crc kubenswrapper[5116]: E1209 14:14:53.083076 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Dec 09 14:14:53 crc kubenswrapper[5116]: I1209 14:14:53.686786 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:54 crc kubenswrapper[5116]: I1209 14:14:54.684158 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:55 crc kubenswrapper[5116]: I1209 14:14:55.685597 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:56 crc kubenswrapper[5116]: I1209 14:14:56.685863 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:57 crc kubenswrapper[5116]: I1209 14:14:57.680011 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:58 crc kubenswrapper[5116]: E1209 14:14:58.331355 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:14:58 crc kubenswrapper[5116]: I1209 14:14:58.562078 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:14:58 crc kubenswrapper[5116]: I1209 14:14:58.563538 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:14:58 crc kubenswrapper[5116]: I1209 14:14:58.563608 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:14:58 crc kubenswrapper[5116]: I1209 14:14:58.563628 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:14:58 crc kubenswrapper[5116]: I1209 14:14:58.563664 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:14:58 crc kubenswrapper[5116]: E1209 14:14:58.575737 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:14:58 crc kubenswrapper[5116]: I1209 14:14:58.686563 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:14:59 crc kubenswrapper[5116]: I1209 14:14:59.684655 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Dec 09 14:15:00 crc kubenswrapper[5116]: I1209 14:15:00.686262 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:01 crc kubenswrapper[5116]: I1209 14:15:01.683534 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:01 crc kubenswrapper[5116]: E1209 14:15:01.804184 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:15:02 crc kubenswrapper[5116]: I1209 14:15:02.683437 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:03 crc kubenswrapper[5116]: E1209 14:15:03.356284 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:15:03 crc kubenswrapper[5116]: I1209 14:15:03.684467 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:04 crc kubenswrapper[5116]: I1209 14:15:04.683365 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:05 crc kubenswrapper[5116]: E1209 14:15:05.339948 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:15:05 crc kubenswrapper[5116]: I1209 14:15:05.576031 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:05 crc kubenswrapper[5116]: I1209 14:15:05.577157 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:05 crc kubenswrapper[5116]: I1209 14:15:05.577466 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:05 crc kubenswrapper[5116]: I1209 14:15:05.577702 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:05 crc kubenswrapper[5116]: I1209 14:15:05.577955 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:15:05 crc kubenswrapper[5116]: E1209 14:15:05.594535 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" 
node="crc" Dec 09 14:15:05 crc kubenswrapper[5116]: I1209 14:15:05.685564 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:05 crc kubenswrapper[5116]: I1209 14:15:05.748025 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:05 crc kubenswrapper[5116]: I1209 14:15:05.749259 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:05 crc kubenswrapper[5116]: I1209 14:15:05.749466 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:05 crc kubenswrapper[5116]: I1209 14:15:05.749602 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:05 crc kubenswrapper[5116]: E1209 14:15:05.750287 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:15:05 crc kubenswrapper[5116]: I1209 14:15:05.750802 5116 scope.go:117] "RemoveContainer" containerID="47f3c7915db13931f1da866e91f9707eee01aa23827ae2fca5543101bd16c854" Dec 09 14:15:05 crc kubenswrapper[5116]: E1209 14:15:05.760123 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f9198de002f6e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198de002f6e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:24.698134382 +0000 UTC m=+3.219879170,LastTimestamp:2025-12-09 14:15:05.752285967 +0000 UTC m=+44.274030765,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:15:06 crc kubenswrapper[5116]: E1209 14:15:06.025799 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f9198f2a877ce\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198f2a877ce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:25.044707278 +0000 UTC m=+3.566452076,LastTimestamp:2025-12-09 14:15:06.020562139 +0000 UTC 
m=+44.542306937,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:15:06 crc kubenswrapper[5116]: E1209 14:15:06.037507 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f9198f3a38105\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9198f3a38105 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:25.061159173 +0000 UTC m=+3.582903971,LastTimestamp:2025-12-09 14:15:06.033275027 +0000 UTC m=+44.555019825,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:15:06 crc kubenswrapper[5116]: E1209 14:15:06.086392 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:15:06 crc kubenswrapper[5116]: I1209 14:15:06.684286 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:06 crc kubenswrapper[5116]: I1209 14:15:06.988099 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 09 14:15:06 crc kubenswrapper[5116]: I1209 14:15:06.990815 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c6edd1519a15dc4109f6c4e3b13e70144b35b26b49d20e7f24357a5d3a8db41c"} Dec 09 14:15:06 crc kubenswrapper[5116]: I1209 14:15:06.991032 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:06 crc kubenswrapper[5116]: I1209 14:15:06.991594 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:06 crc kubenswrapper[5116]: I1209 14:15:06.991659 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:06 crc kubenswrapper[5116]: I1209 14:15:06.991681 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:06 crc kubenswrapper[5116]: E1209 14:15:06.992186 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:15:07 crc kubenswrapper[5116]: I1209 14:15:07.684538 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:08 crc kubenswrapper[5116]: I1209 14:15:08.686825 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:08 crc kubenswrapper[5116]: I1209 14:15:08.997945 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 09 14:15:08 crc kubenswrapper[5116]: I1209 14:15:08.998684 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 09 14:15:09 crc kubenswrapper[5116]: I1209 14:15:09.001774 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c6edd1519a15dc4109f6c4e3b13e70144b35b26b49d20e7f24357a5d3a8db41c" exitCode=255 Dec 09 14:15:09 crc kubenswrapper[5116]: I1209 14:15:09.001848 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"c6edd1519a15dc4109f6c4e3b13e70144b35b26b49d20e7f24357a5d3a8db41c"} Dec 09 14:15:09 crc kubenswrapper[5116]: I1209 14:15:09.001916 5116 scope.go:117] "RemoveContainer" containerID="47f3c7915db13931f1da866e91f9707eee01aa23827ae2fca5543101bd16c854" Dec 09 14:15:09 crc kubenswrapper[5116]: I1209 14:15:09.002317 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:09 crc kubenswrapper[5116]: I1209 14:15:09.003460 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:09 crc kubenswrapper[5116]: I1209 14:15:09.003543 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:09 crc kubenswrapper[5116]: I1209 14:15:09.003571 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:09 crc kubenswrapper[5116]: E1209 14:15:09.004220 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:15:09 crc kubenswrapper[5116]: I1209 14:15:09.004739 5116 scope.go:117] "RemoveContainer" containerID="c6edd1519a15dc4109f6c4e3b13e70144b35b26b49d20e7f24357a5d3a8db41c" Dec 09 14:15:09 crc kubenswrapper[5116]: E1209 14:15:09.005170 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:15:09 crc kubenswrapper[5116]: E1209 14:15:09.011777 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f919d55eda871\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.187f919d55eda871 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:43.890055281 +0000 UTC m=+22.411800079,LastTimestamp:2025-12-09 14:15:09.005110436 +0000 UTC m=+47.526855274,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:15:09 crc kubenswrapper[5116]: I1209 14:15:09.685862 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:10 crc kubenswrapper[5116]: I1209 14:15:10.008545 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 09 14:15:10 crc kubenswrapper[5116]: I1209 14:15:10.689125 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:11 crc kubenswrapper[5116]: I1209 14:15:11.683595 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:11 crc kubenswrapper[5116]: E1209 14:15:11.805272 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:15:12 crc kubenswrapper[5116]: I1209 14:15:12.092014 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:15:12 crc kubenswrapper[5116]: I1209 14:15:12.092231 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:12 crc kubenswrapper[5116]: I1209 14:15:12.093587 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:12 crc kubenswrapper[5116]: I1209 14:15:12.093632 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:12 crc kubenswrapper[5116]: I1209 14:15:12.093648 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:12 crc kubenswrapper[5116]: E1209 14:15:12.094032 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:15:12 crc kubenswrapper[5116]: E1209 14:15:12.350054 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in 
API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:15:12 crc kubenswrapper[5116]: I1209 14:15:12.596109 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:12 crc kubenswrapper[5116]: I1209 14:15:12.597470 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:12 crc kubenswrapper[5116]: I1209 14:15:12.597524 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:12 crc kubenswrapper[5116]: I1209 14:15:12.597545 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:12 crc kubenswrapper[5116]: I1209 14:15:12.597583 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:15:12 crc kubenswrapper[5116]: E1209 14:15:12.603669 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:15:12 crc kubenswrapper[5116]: E1209 14:15:12.608392 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:15:12 crc kubenswrapper[5116]: I1209 14:15:12.685660 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:13 crc kubenswrapper[5116]: I1209 14:15:13.688022 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:14 crc kubenswrapper[5116]: I1209 14:15:14.685723 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:15 crc kubenswrapper[5116]: I1209 14:15:15.690439 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:15 crc kubenswrapper[5116]: I1209 14:15:15.973609 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:15 crc kubenswrapper[5116]: I1209 14:15:15.974042 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:15 crc kubenswrapper[5116]: I1209 14:15:15.975156 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:15 crc kubenswrapper[5116]: I1209 14:15:15.975220 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:15 crc 
kubenswrapper[5116]: I1209 14:15:15.975236 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:15 crc kubenswrapper[5116]: E1209 14:15:15.975791 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:15:15 crc kubenswrapper[5116]: I1209 14:15:15.976149 5116 scope.go:117] "RemoveContainer" containerID="c6edd1519a15dc4109f6c4e3b13e70144b35b26b49d20e7f24357a5d3a8db41c" Dec 09 14:15:15 crc kubenswrapper[5116]: E1209 14:15:15.976419 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:15:15 crc kubenswrapper[5116]: E1209 14:15:15.984690 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f919d55eda871\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f919d55eda871 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:43.890055281 +0000 UTC m=+22.411800079,LastTimestamp:2025-12-09 14:15:15.976388081 +0000 UTC m=+54.498132879,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:15:16 crc kubenswrapper[5116]: I1209 14:15:16.685571 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:16 crc kubenswrapper[5116]: I1209 14:15:16.991291 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:16 crc kubenswrapper[5116]: I1209 14:15:16.991643 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:16 crc kubenswrapper[5116]: I1209 14:15:16.992850 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:16 crc kubenswrapper[5116]: I1209 14:15:16.992895 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:16 crc kubenswrapper[5116]: I1209 14:15:16.992920 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:16 crc kubenswrapper[5116]: E1209 14:15:16.993576 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" 
not found" node="crc" Dec 09 14:15:16 crc kubenswrapper[5116]: I1209 14:15:16.993994 5116 scope.go:117] "RemoveContainer" containerID="c6edd1519a15dc4109f6c4e3b13e70144b35b26b49d20e7f24357a5d3a8db41c" Dec 09 14:15:16 crc kubenswrapper[5116]: E1209 14:15:16.994372 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:15:17 crc kubenswrapper[5116]: E1209 14:15:17.002744 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f919d55eda871\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f919d55eda871 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:14:43.890055281 +0000 UTC m=+22.411800079,LastTimestamp:2025-12-09 14:15:16.99431046 +0000 UTC m=+55.516055288,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:15:17 crc kubenswrapper[5116]: I1209 14:15:17.687064 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:18 crc kubenswrapper[5116]: E1209 14:15:18.632314 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:15:18 crc kubenswrapper[5116]: I1209 14:15:18.686551 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:19 crc kubenswrapper[5116]: E1209 14:15:19.359581 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:15:19 crc kubenswrapper[5116]: I1209 14:15:19.609418 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:19 crc kubenswrapper[5116]: I1209 14:15:19.611305 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:19 crc kubenswrapper[5116]: I1209 14:15:19.611369 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:19 crc kubenswrapper[5116]: I1209 14:15:19.611389 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:19 crc kubenswrapper[5116]: I1209 14:15:19.611433 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:15:19 crc kubenswrapper[5116]: E1209 14:15:19.624542 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:15:19 crc kubenswrapper[5116]: I1209 14:15:19.685429 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:20 crc kubenswrapper[5116]: I1209 14:15:20.683581 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:21 crc kubenswrapper[5116]: I1209 14:15:21.685043 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:21 crc kubenswrapper[5116]: E1209 14:15:21.806272 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:15:22 crc kubenswrapper[5116]: I1209 14:15:22.690277 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:23 crc kubenswrapper[5116]: I1209 14:15:23.683769 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:24 crc kubenswrapper[5116]: I1209 14:15:24.682784 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:25 crc kubenswrapper[5116]: I1209 14:15:25.686625 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:15:26 crc kubenswrapper[5116]: E1209 14:15:26.368489 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:15:26 crc kubenswrapper[5116]: I1209 14:15:26.625456 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:26 crc 
kubenswrapper[5116]: I1209 14:15:26.626588 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:26 crc kubenswrapper[5116]: I1209 14:15:26.626635 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:26 crc kubenswrapper[5116]: I1209 14:15:26.626649 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:26 crc kubenswrapper[5116]: I1209 14:15:26.626676 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:15:26 crc kubenswrapper[5116]: E1209 14:15:26.636708 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:15:26 crc kubenswrapper[5116]: I1209 14:15:26.653880 5116 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-h8skk" Dec 09 14:15:26 crc kubenswrapper[5116]: I1209 14:15:26.661453 5116 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-h8skk" Dec 09 14:15:26 crc kubenswrapper[5116]: I1209 14:15:26.703316 5116 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 09 14:15:27 crc kubenswrapper[5116]: I1209 14:15:27.621669 5116 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 09 14:15:27 crc kubenswrapper[5116]: I1209 14:15:27.663133 5116 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-08 14:10:26 +0000 UTC" deadline="2026-01-04 07:46:23.479339043 +0000 UTC" Dec 09 14:15:27 crc kubenswrapper[5116]: I1209 14:15:27.663187 5116 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="617h30m55.816157146s" Dec 09 14:15:28 crc kubenswrapper[5116]: I1209 14:15:28.748930 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:28 crc kubenswrapper[5116]: I1209 14:15:28.750509 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:28 crc kubenswrapper[5116]: I1209 14:15:28.750557 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:28 crc kubenswrapper[5116]: I1209 14:15:28.750568 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:28 crc kubenswrapper[5116]: E1209 14:15:28.751196 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:15:28 crc kubenswrapper[5116]: I1209 14:15:28.751573 5116 scope.go:117] "RemoveContainer" containerID="c6edd1519a15dc4109f6c4e3b13e70144b35b26b49d20e7f24357a5d3a8db41c" Dec 09 14:15:29 crc kubenswrapper[5116]: I1209 14:15:29.068817 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 09 14:15:29 crc kubenswrapper[5116]: I1209 14:15:29.071165 5116 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176"} Dec 09 14:15:29 crc kubenswrapper[5116]: I1209 14:15:29.071476 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:29 crc kubenswrapper[5116]: I1209 14:15:29.072292 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:29 crc kubenswrapper[5116]: I1209 14:15:29.072340 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:29 crc kubenswrapper[5116]: I1209 14:15:29.072360 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:29 crc kubenswrapper[5116]: E1209 14:15:29.072945 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:15:31 crc kubenswrapper[5116]: I1209 14:15:31.079125 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 09 14:15:31 crc kubenswrapper[5116]: I1209 14:15:31.080163 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 09 14:15:31 crc kubenswrapper[5116]: I1209 14:15:31.082281 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176" exitCode=255 Dec 09 14:15:31 crc kubenswrapper[5116]: I1209 14:15:31.082366 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176"} Dec 09 14:15:31 crc kubenswrapper[5116]: I1209 14:15:31.082431 5116 scope.go:117] "RemoveContainer" containerID="c6edd1519a15dc4109f6c4e3b13e70144b35b26b49d20e7f24357a5d3a8db41c" Dec 09 14:15:31 crc kubenswrapper[5116]: I1209 14:15:31.082877 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:31 crc kubenswrapper[5116]: I1209 14:15:31.083730 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:31 crc kubenswrapper[5116]: I1209 14:15:31.083765 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:31 crc kubenswrapper[5116]: I1209 14:15:31.083777 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:31 crc kubenswrapper[5116]: E1209 14:15:31.084218 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:15:31 crc kubenswrapper[5116]: I1209 14:15:31.084456 5116 scope.go:117] "RemoveContainer" containerID="97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176" Dec 09 14:15:31 crc kubenswrapper[5116]: E1209 14:15:31.084676 5116 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:15:31 crc kubenswrapper[5116]: E1209 14:15:31.807342 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:15:32 crc kubenswrapper[5116]: I1209 14:15:32.087613 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.637363 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.638595 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.638655 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.638674 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.638777 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.645577 5116 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.645855 5116 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 09 14:15:33 crc kubenswrapper[5116]: E1209 14:15:33.645876 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.648676 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.648715 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.648727 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.648744 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.648757 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:33Z","lastTransitionTime":"2025-12-09T14:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:33 crc kubenswrapper[5116]: E1209 14:15:33.663051 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.670551 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.670589 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.670598 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.670611 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.670620 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:33Z","lastTransitionTime":"2025-12-09T14:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:33 crc kubenswrapper[5116]: E1209 14:15:33.682794 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.690865 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.690916 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.690931 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.690967 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.690990 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:33Z","lastTransitionTime":"2025-12-09T14:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:33 crc kubenswrapper[5116]: E1209 14:15:33.702576 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.710022 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.710063 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.710076 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.710093 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:33 crc kubenswrapper[5116]: I1209 14:15:33.710105 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:33Z","lastTransitionTime":"2025-12-09T14:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:33 crc kubenswrapper[5116]: E1209 14:15:33.719030 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:33 crc kubenswrapper[5116]: E1209 14:15:33.719174 5116 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:15:33 crc kubenswrapper[5116]: E1209 14:15:33.719202 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:33 crc kubenswrapper[5116]: E1209 14:15:33.819465 5116 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:33 crc kubenswrapper[5116]: E1209 14:15:33.920376 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:34 crc kubenswrapper[5116]: E1209 14:15:34.021370 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:34 crc kubenswrapper[5116]: E1209 14:15:34.122711 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:34 crc kubenswrapper[5116]: E1209 14:15:34.223647 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:34 crc kubenswrapper[5116]: E1209 14:15:34.324735 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:34 crc kubenswrapper[5116]: E1209 14:15:34.425776 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:34 crc kubenswrapper[5116]: E1209 14:15:34.526605 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:34 crc kubenswrapper[5116]: E1209 14:15:34.626725 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:34 crc kubenswrapper[5116]: E1209 14:15:34.727623 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:34 crc kubenswrapper[5116]: E1209 14:15:34.828266 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:34 crc kubenswrapper[5116]: E1209 14:15:34.928592 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:35 crc kubenswrapper[5116]: E1209 14:15:35.028673 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:35 crc kubenswrapper[5116]: E1209 14:15:35.129049 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:35 crc kubenswrapper[5116]: E1209 14:15:35.229475 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:35 crc kubenswrapper[5116]: E1209 14:15:35.330341 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:35 crc kubenswrapper[5116]: E1209 14:15:35.430839 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:35 crc kubenswrapper[5116]: E1209 14:15:35.531998 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:35 crc kubenswrapper[5116]: E1209 14:15:35.633044 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:35 crc kubenswrapper[5116]: E1209 14:15:35.734120 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:35 crc kubenswrapper[5116]: E1209 14:15:35.835183 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:35 crc kubenswrapper[5116]: E1209 
14:15:35.936685 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:35 crc kubenswrapper[5116]: I1209 14:15:35.973277 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:35 crc kubenswrapper[5116]: I1209 14:15:35.973758 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:35 crc kubenswrapper[5116]: I1209 14:15:35.975170 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:35 crc kubenswrapper[5116]: I1209 14:15:35.975271 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:35 crc kubenswrapper[5116]: I1209 14:15:35.975300 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:35 crc kubenswrapper[5116]: E1209 14:15:35.976133 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:15:35 crc kubenswrapper[5116]: I1209 14:15:35.976536 5116 scope.go:117] "RemoveContainer" containerID="97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176" Dec 09 14:15:35 crc kubenswrapper[5116]: E1209 14:15:35.976876 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:15:36 crc kubenswrapper[5116]: E1209 14:15:36.037615 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:36 crc kubenswrapper[5116]: E1209 14:15:36.138451 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:36 crc kubenswrapper[5116]: E1209 14:15:36.238989 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:36 crc kubenswrapper[5116]: E1209 14:15:36.340083 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:36 crc kubenswrapper[5116]: E1209 14:15:36.441057 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:36 crc kubenswrapper[5116]: E1209 14:15:36.541502 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:36 crc kubenswrapper[5116]: E1209 14:15:36.642429 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:36 crc kubenswrapper[5116]: E1209 14:15:36.743199 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:36 crc kubenswrapper[5116]: E1209 14:15:36.843599 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:36 crc kubenswrapper[5116]: E1209 14:15:36.944689 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 
14:15:37 crc kubenswrapper[5116]: E1209 14:15:37.045113 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:37 crc kubenswrapper[5116]: E1209 14:15:37.146206 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:37 crc kubenswrapper[5116]: E1209 14:15:37.246901 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:37 crc kubenswrapper[5116]: E1209 14:15:37.347815 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:37 crc kubenswrapper[5116]: E1209 14:15:37.448422 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:37 crc kubenswrapper[5116]: E1209 14:15:37.548559 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:37 crc kubenswrapper[5116]: E1209 14:15:37.649037 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:37 crc kubenswrapper[5116]: E1209 14:15:37.749947 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:37 crc kubenswrapper[5116]: E1209 14:15:37.851153 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:37 crc kubenswrapper[5116]: E1209 14:15:37.951367 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:38 crc kubenswrapper[5116]: E1209 14:15:38.051918 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:38 crc kubenswrapper[5116]: E1209 14:15:38.152447 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:38 crc kubenswrapper[5116]: E1209 14:15:38.253257 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:38 crc kubenswrapper[5116]: E1209 14:15:38.354303 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:38 crc kubenswrapper[5116]: E1209 14:15:38.455359 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:38 crc kubenswrapper[5116]: E1209 14:15:38.556393 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:38 crc kubenswrapper[5116]: E1209 14:15:38.656794 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:38 crc kubenswrapper[5116]: E1209 14:15:38.757839 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:38 crc kubenswrapper[5116]: E1209 14:15:38.858310 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:38 crc kubenswrapper[5116]: E1209 14:15:38.958517 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:39 crc kubenswrapper[5116]: E1209 14:15:39.059641 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" 
not found" Dec 09 14:15:39 crc kubenswrapper[5116]: I1209 14:15:39.071944 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:39 crc kubenswrapper[5116]: I1209 14:15:39.072510 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:39 crc kubenswrapper[5116]: I1209 14:15:39.073694 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:39 crc kubenswrapper[5116]: I1209 14:15:39.073816 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:39 crc kubenswrapper[5116]: I1209 14:15:39.073838 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:39 crc kubenswrapper[5116]: E1209 14:15:39.074526 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:15:39 crc kubenswrapper[5116]: I1209 14:15:39.074947 5116 scope.go:117] "RemoveContainer" containerID="97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176" Dec 09 14:15:39 crc kubenswrapper[5116]: E1209 14:15:39.075306 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:15:39 crc kubenswrapper[5116]: E1209 14:15:39.160707 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:39 crc kubenswrapper[5116]: E1209 14:15:39.261417 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:39 crc kubenswrapper[5116]: E1209 14:15:39.362607 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:39 crc kubenswrapper[5116]: E1209 14:15:39.463437 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:39 crc kubenswrapper[5116]: E1209 14:15:39.564461 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:39 crc kubenswrapper[5116]: E1209 14:15:39.664801 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:39 crc kubenswrapper[5116]: I1209 14:15:39.748095 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:39 crc kubenswrapper[5116]: I1209 14:15:39.749301 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:39 crc kubenswrapper[5116]: I1209 14:15:39.749358 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:39 crc kubenswrapper[5116]: I1209 14:15:39.749383 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:39 crc kubenswrapper[5116]: E1209 14:15:39.750053 5116 kubelet.go:3336] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:15:39 crc kubenswrapper[5116]: E1209 14:15:39.765951 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:39 crc kubenswrapper[5116]: E1209 14:15:39.866267 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:39 crc kubenswrapper[5116]: E1209 14:15:39.967329 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:40 crc kubenswrapper[5116]: E1209 14:15:40.068335 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:40 crc kubenswrapper[5116]: E1209 14:15:40.168479 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:40 crc kubenswrapper[5116]: E1209 14:15:40.268689 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:40 crc kubenswrapper[5116]: E1209 14:15:40.369675 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:40 crc kubenswrapper[5116]: I1209 14:15:40.468300 5116 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:15:40 crc kubenswrapper[5116]: E1209 14:15:40.470640 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:40 crc kubenswrapper[5116]: E1209 14:15:40.571808 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:40 crc kubenswrapper[5116]: E1209 14:15:40.672434 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:40 crc kubenswrapper[5116]: E1209 14:15:40.773029 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:40 crc kubenswrapper[5116]: E1209 14:15:40.873573 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:40 crc kubenswrapper[5116]: E1209 14:15:40.973947 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:41 crc kubenswrapper[5116]: E1209 14:15:41.074265 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:41 crc kubenswrapper[5116]: E1209 14:15:41.175530 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:41 crc kubenswrapper[5116]: E1209 14:15:41.276599 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:41 crc kubenswrapper[5116]: E1209 14:15:41.377664 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:41 crc kubenswrapper[5116]: E1209 14:15:41.478758 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:41 crc kubenswrapper[5116]: E1209 14:15:41.579669 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:41 crc kubenswrapper[5116]: E1209 
14:15:41.680190 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:41 crc kubenswrapper[5116]: E1209 14:15:41.780332 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:41 crc kubenswrapper[5116]: E1209 14:15:41.808246 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:15:41 crc kubenswrapper[5116]: E1209 14:15:41.880703 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:41 crc kubenswrapper[5116]: E1209 14:15:41.981681 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:42 crc kubenswrapper[5116]: E1209 14:15:42.082843 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:42 crc kubenswrapper[5116]: E1209 14:15:42.186053 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:42 crc kubenswrapper[5116]: E1209 14:15:42.286522 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:42 crc kubenswrapper[5116]: E1209 14:15:42.386710 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:42 crc kubenswrapper[5116]: I1209 14:15:42.451208 5116 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:15:42 crc kubenswrapper[5116]: E1209 14:15:42.487622 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:42 crc kubenswrapper[5116]: E1209 14:15:42.588694 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:42 crc kubenswrapper[5116]: E1209 14:15:42.689851 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:42 crc kubenswrapper[5116]: E1209 14:15:42.790190 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:42 crc kubenswrapper[5116]: E1209 14:15:42.891079 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:42 crc kubenswrapper[5116]: E1209 14:15:42.991604 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:43 crc kubenswrapper[5116]: E1209 14:15:43.091737 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:43 crc kubenswrapper[5116]: E1209 14:15:43.192563 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:43 crc kubenswrapper[5116]: E1209 14:15:43.292685 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:43 crc kubenswrapper[5116]: E1209 14:15:43.393830 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:43 crc kubenswrapper[5116]: E1209 14:15:43.494621 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not 
found" Dec 09 14:15:43 crc kubenswrapper[5116]: E1209 14:15:43.594757 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:43 crc kubenswrapper[5116]: E1209 14:15:43.695779 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:43 crc kubenswrapper[5116]: E1209 14:15:43.796791 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:43 crc kubenswrapper[5116]: E1209 14:15:43.874211 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.879162 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.879208 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.879220 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.879239 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.879251 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:43Z","lastTransitionTime":"2025-12-09T14:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:43 crc kubenswrapper[5116]: E1209 14:15:43.893777 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.898093 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.898166 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.898185 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.898211 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.898229 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:43Z","lastTransitionTime":"2025-12-09T14:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:43 crc kubenswrapper[5116]: E1209 14:15:43.913580 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.917733 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.917807 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.917833 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.917866 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.917890 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:43Z","lastTransitionTime":"2025-12-09T14:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:43 crc kubenswrapper[5116]: E1209 14:15:43.932649 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.937082 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.937123 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.937160 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.937178 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:43 crc kubenswrapper[5116]: I1209 14:15:43.937191 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:43Z","lastTransitionTime":"2025-12-09T14:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:43 crc kubenswrapper[5116]: E1209 14:15:43.951865 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:43 crc kubenswrapper[5116]: E1209 14:15:43.952200 5116 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:15:43 crc kubenswrapper[5116]: E1209 14:15:43.952244 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:44 crc kubenswrapper[5116]: E1209 14:15:44.052353 5116 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:44 crc kubenswrapper[5116]: E1209 14:15:44.152658 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:44 crc kubenswrapper[5116]: E1209 14:15:44.253456 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:44 crc kubenswrapper[5116]: E1209 14:15:44.353938 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:44 crc kubenswrapper[5116]: E1209 14:15:44.454466 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:44 crc kubenswrapper[5116]: E1209 14:15:44.555613 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:44 crc kubenswrapper[5116]: E1209 14:15:44.656063 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:44 crc kubenswrapper[5116]: E1209 14:15:44.756198 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:44 crc kubenswrapper[5116]: E1209 14:15:44.857066 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:44 crc kubenswrapper[5116]: E1209 14:15:44.957291 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:45 crc kubenswrapper[5116]: E1209 14:15:45.058057 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:45 crc kubenswrapper[5116]: E1209 14:15:45.158833 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:45 crc kubenswrapper[5116]: E1209 14:15:45.259188 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:45 crc kubenswrapper[5116]: E1209 14:15:45.360144 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:45 crc kubenswrapper[5116]: E1209 14:15:45.461175 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:45 crc kubenswrapper[5116]: E1209 14:15:45.562179 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:45 crc kubenswrapper[5116]: E1209 14:15:45.662551 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:45 crc kubenswrapper[5116]: E1209 14:15:45.763602 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:45 crc kubenswrapper[5116]: E1209 14:15:45.864027 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:45 crc kubenswrapper[5116]: E1209 14:15:45.964390 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:46 crc kubenswrapper[5116]: E1209 14:15:46.065395 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:46 crc kubenswrapper[5116]: E1209 
14:15:46.165720 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:46 crc kubenswrapper[5116]: E1209 14:15:46.266341 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:46 crc kubenswrapper[5116]: E1209 14:15:46.367523 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:46 crc kubenswrapper[5116]: E1209 14:15:46.468332 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:46 crc kubenswrapper[5116]: E1209 14:15:46.569520 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:46 crc kubenswrapper[5116]: E1209 14:15:46.670405 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:46 crc kubenswrapper[5116]: I1209 14:15:46.747742 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:46 crc kubenswrapper[5116]: I1209 14:15:46.752512 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:46 crc kubenswrapper[5116]: I1209 14:15:46.752572 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:46 crc kubenswrapper[5116]: I1209 14:15:46.752592 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:46 crc kubenswrapper[5116]: E1209 14:15:46.753690 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:15:46 crc kubenswrapper[5116]: E1209 14:15:46.770836 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:46 crc kubenswrapper[5116]: E1209 14:15:46.871399 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:46 crc kubenswrapper[5116]: E1209 14:15:46.971765 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:47 crc kubenswrapper[5116]: E1209 14:15:47.072635 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:47 crc kubenswrapper[5116]: E1209 14:15:47.173401 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:47 crc kubenswrapper[5116]: E1209 14:15:47.274125 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:47 crc kubenswrapper[5116]: E1209 14:15:47.374466 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:47 crc kubenswrapper[5116]: E1209 14:15:47.474776 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:47 crc kubenswrapper[5116]: E1209 14:15:47.575087 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:47 crc kubenswrapper[5116]: E1209 14:15:47.676006 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not 
found" Dec 09 14:15:47 crc kubenswrapper[5116]: E1209 14:15:47.776306 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:47 crc kubenswrapper[5116]: E1209 14:15:47.876546 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:47 crc kubenswrapper[5116]: E1209 14:15:47.976793 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:48 crc kubenswrapper[5116]: E1209 14:15:48.076980 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:48 crc kubenswrapper[5116]: E1209 14:15:48.178026 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:48 crc kubenswrapper[5116]: E1209 14:15:48.278931 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:48 crc kubenswrapper[5116]: E1209 14:15:48.379197 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:48 crc kubenswrapper[5116]: E1209 14:15:48.479336 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:48 crc kubenswrapper[5116]: E1209 14:15:48.579755 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:48 crc kubenswrapper[5116]: E1209 14:15:48.680729 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:48 crc kubenswrapper[5116]: E1209 14:15:48.781162 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:48 crc kubenswrapper[5116]: E1209 14:15:48.882021 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:48 crc kubenswrapper[5116]: E1209 14:15:48.983435 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:49 crc kubenswrapper[5116]: E1209 14:15:49.083646 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:49 crc kubenswrapper[5116]: E1209 14:15:49.184535 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:49 crc kubenswrapper[5116]: E1209 14:15:49.285646 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:49 crc kubenswrapper[5116]: E1209 14:15:49.386735 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:49 crc kubenswrapper[5116]: E1209 14:15:49.487322 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:49 crc kubenswrapper[5116]: E1209 14:15:49.587944 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:49 crc kubenswrapper[5116]: E1209 14:15:49.688455 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:49 crc kubenswrapper[5116]: E1209 14:15:49.789051 5116 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Dec 09 14:15:49 crc kubenswrapper[5116]: E1209 14:15:49.889425 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:49 crc kubenswrapper[5116]: E1209 14:15:49.989710 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:50 crc kubenswrapper[5116]: E1209 14:15:50.090947 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:50 crc kubenswrapper[5116]: E1209 14:15:50.191537 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:50 crc kubenswrapper[5116]: E1209 14:15:50.292950 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:50 crc kubenswrapper[5116]: E1209 14:15:50.393346 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:50 crc kubenswrapper[5116]: E1209 14:15:50.493640 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:50 crc kubenswrapper[5116]: E1209 14:15:50.594835 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:50 crc kubenswrapper[5116]: E1209 14:15:50.695417 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:50 crc kubenswrapper[5116]: I1209 14:15:50.749091 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:15:50 crc kubenswrapper[5116]: I1209 14:15:50.750610 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:50 crc kubenswrapper[5116]: I1209 14:15:50.750690 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:50 crc kubenswrapper[5116]: I1209 14:15:50.750712 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:50 crc kubenswrapper[5116]: E1209 14:15:50.751481 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:15:50 crc kubenswrapper[5116]: I1209 14:15:50.752127 5116 scope.go:117] "RemoveContainer" containerID="97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176" Dec 09 14:15:50 crc kubenswrapper[5116]: E1209 14:15:50.752467 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:15:50 crc kubenswrapper[5116]: E1209 14:15:50.796093 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:50 crc kubenswrapper[5116]: E1209 14:15:50.896551 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:50 crc kubenswrapper[5116]: E1209 14:15:50.997207 5116 kubelet_node_status.go:515] "Error getting the current 
node from lister" err="node \"crc\" not found" Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.098264 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.199329 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.215146 5116 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.290347 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.301791 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.301873 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.301898 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.301929 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.301985 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:51Z","lastTransitionTime":"2025-12-09T14:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.305167 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.405666 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.405773 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.405832 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.405871 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.405896 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:51Z","lastTransitionTime":"2025-12-09T14:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.406895 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.506119 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.508178 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.508313 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.508335 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.508361 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.508381 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:51Z","lastTransitionTime":"2025-12-09T14:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.609865 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.611092 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.611138 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.611157 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.611182 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.611200 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:51Z","lastTransitionTime":"2025-12-09T14:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.713894 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.713983 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.714003 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.714028 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.714050 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:51Z","lastTransitionTime":"2025-12-09T14:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.728073 5116 apiserver.go:52] "Watching apiserver" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.737441 5116 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.738278 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-2888f","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-multus/network-metrics-daemon-pmt9f","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-image-registry/node-ca-26d6n","openshift-machine-config-operator/machine-config-daemon-phdhk","openshift-multus/multus-554lf","openshift-multus/multus-additional-cni-plugins-65brv","openshift-network-operator/iptables-alerter-5jnd7","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-ovn-kubernetes/ovnkube-node-tg8rn"] Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.739803 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.741059 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.741196 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.742934 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.743492 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.745732 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.746473 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.746610 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.749872 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.751098 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.752488 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.752919 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.753902 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.754283 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.756713 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.756767 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.766934 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.767411 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.767126 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-26d6n" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.769788 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.771815 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.773114 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.773449 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.773178 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.774249 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.783580 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.784763 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.785174 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.785306 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.785363 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.785426 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.791607 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.791747 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.794230 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.795601 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-2888f" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.798715 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.798917 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.798926 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.799524 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.799782 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.800167 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.800193 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.801305 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.803272 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.804122 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.804818 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.805194 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmt9f" podUID="51843597-ba2b-4059-aa79-13887c6100f2" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.805458 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.808460 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.810606 5116 scope.go:117] "RemoveContainer" containerID="97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176" Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.810856 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.811661 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.812172 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.812654 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.812820 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.813123 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.812681 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.812749 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.818730 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.819439 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.819595 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.819729 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.819866 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.820016 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:51Z","lastTransitionTime":"2025-12-09T14:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.820752 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.824093 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.843257 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.855349 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.864468 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872019 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/280f2c67-05f3-4f21-bd2d-6a22add2b93e-cnibin\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872066 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-etc-kubernetes\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872116 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/280f2c67-05f3-4f21-bd2d-6a22add2b93e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872145 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-multus-socket-dir-parent\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872182 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872217 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.872293 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872313 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872372 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbg2k\" (UniqueName: \"kubernetes.io/projected/2a441b53-f957-4f01-a123-a96c637c3fe2-kube-api-access-jbg2k\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.872423 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:52.372371452 +0000 UTC m=+90.894116260 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872474 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872504 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-cnibin\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872535 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-multus-conf-dir\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872565 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/140ab739-f0e3-4429-8e23-03782755777d-proxy-tls\") pod \"machine-config-daemon-phdhk\" (UID: \"140ab739-f0e3-4429-8e23-03782755777d\") " pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872588 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/140ab739-f0e3-4429-8e23-03782755777d-mcd-auth-proxy-config\") pod \"machine-config-daemon-phdhk\" (UID: \"140ab739-f0e3-4429-8e23-03782755777d\") " 
pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872627 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/280f2c67-05f3-4f21-bd2d-6a22add2b93e-cni-binary-copy\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872668 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb62t\" (UniqueName: \"kubernetes.io/projected/140ab739-f0e3-4429-8e23-03782755777d-kube-api-access-cb62t\") pod \"machine-config-daemon-phdhk\" (UID: \"140ab739-f0e3-4429-8e23-03782755777d\") " pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872697 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872729 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872757 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-var-lib-cni-bin\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872782 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.872915 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/280f2c67-05f3-4f21-bd2d-6a22add2b93e-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.873052 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2a441b53-f957-4f01-a123-a96c637c3fe2-cni-binary-copy\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.873552 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.873607 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.873747 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.873792 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.873828 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-system-cni-dir\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.873854 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-multus-cni-dir\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.873883 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.873914 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/280f2c67-05f3-4f21-bd2d-6a22add2b93e-os-release\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.873937 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/280f2c67-05f3-4f21-bd2d-6a22add2b93e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") 
" pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.873978 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-var-lib-kubelet\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874006 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c1e98f6b-f90a-4408-b303-926b753052ff-serviceca\") pod \"node-ca-26d6n\" (UID: \"c1e98f6b-f90a-4408-b303-926b753052ff\") " pod="openshift-image-registry/node-ca-26d6n" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874039 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfx5p\" (UniqueName: \"kubernetes.io/projected/c1e98f6b-f90a-4408-b303-926b753052ff-kube-api-access-dfx5p\") pod \"node-ca-26d6n\" (UID: \"c1e98f6b-f90a-4408-b303-926b753052ff\") " pod="openshift-image-registry/node-ca-26d6n" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874076 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/140ab739-f0e3-4429-8e23-03782755777d-rootfs\") pod \"machine-config-daemon-phdhk\" (UID: \"140ab739-f0e3-4429-8e23-03782755777d\") " pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874100 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtm56\" (UniqueName: \"kubernetes.io/projected/280f2c67-05f3-4f21-bd2d-6a22add2b93e-kube-api-access-jtm56\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874125 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-run-netns\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874149 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-var-lib-cni-multus\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874179 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874212 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874240 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-hostroot\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874267 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-run-multus-certs\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874293 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1e98f6b-f90a-4408-b303-926b753052ff-host\") pod \"node-ca-26d6n\" (UID: \"c1e98f6b-f90a-4408-b303-926b753052ff\") " pod="openshift-image-registry/node-ca-26d6n" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874318 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/280f2c67-05f3-4f21-bd2d-6a22add2b93e-system-cni-dir\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874341 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-os-release\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874364 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2a441b53-f957-4f01-a123-a96c637c3fe2-multus-daemon-config\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874388 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-run-k8s-cni-cncf-io\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.874698 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.874860 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.874924 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:52.37490793 +0000 UTC m=+90.896652738 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.875243 5116 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.876517 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.877513 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.883801 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.884832 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.887467 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.891373 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.891417 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.891439 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.891915 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:52.391886702 +0000 UTC m=+90.913631540 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.893082 5116 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.896464 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.897208 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.899252 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.899276 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.899291 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.899354 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:52.3993357 +0000 UTC m=+90.921080508 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.902385 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.915079 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.916222 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.922293 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.922347 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.922361 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.922381 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.922395 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:51Z","lastTransitionTime":"2025-12-09T14:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.929260 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.938095 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.947512 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.956691 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.972669 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975009 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975041 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975063 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975079 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975096 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975115 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975381 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975429 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975447 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975514 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975531 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975546 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975563 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975577 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975593 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975608 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975623 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: 
\"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975641 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975657 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975672 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975691 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975707 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975723 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975740 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975756 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975773 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975789 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975804 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975822 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975838 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975858 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975873 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975893 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975934 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975973 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.975988 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976001 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976016 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976016 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976030 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976048 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976071 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976105 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976129 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976155 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976177 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 
14:15:51.976212 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976236 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976258 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976278 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976285 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976303 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976326 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976348 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976369 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976386 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976417 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976435 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976452 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976471 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976526 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod 
\"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976544 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976569 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976626 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976650 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976675 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976701 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976728 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976752 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976773 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976801 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976822 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976846 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976868 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976560 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976668 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: E1209 14:15:51.977306 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:15:52.477269924 +0000 UTC m=+90.999014762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.982628 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.982696 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.982743 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.982791 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.982843 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.982890 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983051 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983108 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983147 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod 
\"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983177 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983204 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983228 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983255 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983281 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983306 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983331 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983360 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983395 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983427 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983450 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983473 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983499 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983580 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983638 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983703 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983731 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983755 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983780 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983808 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983838 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983863 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983892 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983943 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983998 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.984027 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.984052 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.984078 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.984103 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.984129 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.984157 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.982839 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983511 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.977540 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.977518 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.977984 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.977986 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.978044 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.976768 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.978297 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.978397 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.978492 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.978566 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.978586 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.979180 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.979173 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). 
InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.979232 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.979600 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.979684 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.979840 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.979928 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.980126 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.984411 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.980215 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.980522 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.980731 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.980738 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.980749 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.980910 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.980982 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.981148 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.981099 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.981464 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.981409 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.981536 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.981735 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.981732 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.982093 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.982133 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.982156 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.982196 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.982366 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.982371 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983595 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983625 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.983933 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.984053 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.984050 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.977412 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.984172 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.984585 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.984989 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.984651 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). 
InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.985091 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.984998 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.985175 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.985279 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.985524 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.985544 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.985842 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.986091 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.986158 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.986220 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.986401 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.986438 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.986462 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.986563 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.986700 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.986712 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.986827 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.986919 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.986974 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.987033 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.987048 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.987374 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.987504 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.987655 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.987724 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.987712 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.987931 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988020 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988081 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988136 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988192 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988248 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988305 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988316 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988327 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988358 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988415 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988442 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988371 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988468 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988523 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988578 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988631 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988686 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988742 5116 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988810 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988870 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988925 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989022 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989090 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989151 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989227 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989279 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989319 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989355 5116 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989400 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989439 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989478 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989517 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989569 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989610 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989648 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989690 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989728 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989770 5116 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989808 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989847 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989943 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990032 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990080 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990116 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990154 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990189 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990228 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990268 5116 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990308 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990348 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990384 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990422 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990457 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990496 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990534 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990571 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990610 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990649 
5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990689 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990729 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990767 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990805 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990853 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990890 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990932 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991006 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991046 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 
09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991096 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991136 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991182 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991221 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991260 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991309 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991352 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991392 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991434 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991476 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod 
\"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991519 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991560 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991599 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991650 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.993011 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.993117 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.993187 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.993251 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.993320 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.993404 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod 
\"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.993636 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.993712 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.993788 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.993861 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988468 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.994894 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988491 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988510 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988541 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.988877 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989332 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989521 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.989839 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990517 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.990645 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991018 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991647 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991733 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991775 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991809 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991757 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991936 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.991944 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.995172 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.992223 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.992264 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.992573 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.992591 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.992540 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.992565 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.992674 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.992725 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.993231 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.995273 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.995263 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.994124 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.995321 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.994140 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.994243 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.994279 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.994806 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.995496 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.995543 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.995926 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.996064 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{
\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.996521 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.996619 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.996842 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.997198 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.997277 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.997377 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.997388 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.997430 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.997431 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.997497 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.997808 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.997867 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.998066 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.998183 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). 
InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.998545 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.998570 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.998786 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.998887 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.998923 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.998920 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.998987 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.999113 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.999254 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.999517 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.999531 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:51 crc kubenswrapper[5116]: I1209 14:15:51.999771 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:51.999905 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.000004 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:51.999815 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.000023 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). 
InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.000008 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.000096 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.000699 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.000858 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.001046 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.001065 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.001535 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.001732 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.001741 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.001763 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.001779 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.001856 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.002257 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.002413 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.002649 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.002940 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.003075 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.002328 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004035 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004058 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004076 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004096 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004113 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 
09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004133 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004378 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004436 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004473 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004509 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004542 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004581 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004616 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004651 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004686 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004720 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004754 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005224 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005271 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005306 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005344 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005376 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005416 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005602 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq82j\" (UniqueName: \"kubernetes.io/projected/51843597-ba2b-4059-aa79-13887c6100f2-kube-api-access-wq82j\") pod \"network-metrics-daemon-pmt9f\" (UID: \"51843597-ba2b-4059-aa79-13887c6100f2\") " pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005642 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2eddb9fd-1d3a-4992-b326-9271ffb360e7-tmp-dir\") pod \"node-resolver-2888f\" (UID: \"2eddb9fd-1d3a-4992-b326-9271ffb360e7\") " pod="openshift-dns/node-resolver-2888f" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005718 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jbg2k\" (UniqueName: \"kubernetes.io/projected/2a441b53-f957-4f01-a123-a96c637c3fe2-kube-api-access-jbg2k\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005756 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-cnibin\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005788 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-multus-conf-dir\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005818 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-slash\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005850 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-run-netns\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005880 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-cni-netd\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005918 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/140ab739-f0e3-4429-8e23-03782755777d-proxy-tls\") pod \"machine-config-daemon-phdhk\" (UID: \"140ab739-f0e3-4429-8e23-03782755777d\") " pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005978 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/140ab739-f0e3-4429-8e23-03782755777d-mcd-auth-proxy-config\") pod \"machine-config-daemon-phdhk\" (UID: \"140ab739-f0e3-4429-8e23-03782755777d\") " pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006020 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/280f2c67-05f3-4f21-bd2d-6a22add2b93e-cni-binary-copy\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006056 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-var-lib-openvswitch\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006089 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-env-overrides\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006120 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2eddb9fd-1d3a-4992-b326-9271ffb360e7-hosts-file\") pod \"node-resolver-2888f\" (UID: \"2eddb9fd-1d3a-4992-b326-9271ffb360e7\") " pod="openshift-dns/node-resolver-2888f" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006156 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cb62t\" (UniqueName: \"kubernetes.io/projected/140ab739-f0e3-4429-8e23-03782755777d-kube-api-access-cb62t\") pod \"machine-config-daemon-phdhk\" (UID: \"140ab739-f0e3-4429-8e23-03782755777d\") " pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006200 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-var-lib-cni-bin\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006223 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006249 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-w69sm\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006278 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/280f2c67-05f3-4f21-bd2d-6a22add2b93e-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 
crc kubenswrapper[5116]: I1209 14:15:52.006301 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2a441b53-f957-4f01-a123-a96c637c3fe2-cni-binary-copy\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006343 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-kubelet\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006371 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-openvswitch\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006394 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-ovnkube-config\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006453 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-var-lib-cni-bin\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.003466 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004355 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004586 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004977 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.004935 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005316 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005563 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.005930 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006017 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006038 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006365 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). 
InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006541 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.006718 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.007040 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.007212 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.007343 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.007476 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.007845 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-cnibin\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.007918 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/140ab739-f0e3-4429-8e23-03782755777d-mcd-auth-proxy-config\") pod \"machine-config-daemon-phdhk\" (UID: \"140ab739-f0e3-4429-8e23-03782755777d\") " pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.007605 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.008010 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-multus-conf-dir\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.009071 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/280f2c67-05f3-4f21-bd2d-6a22add2b93e-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.009116 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/280f2c67-05f3-4f21-bd2d-6a22add2b93e-cni-binary-copy\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.009268 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-system-cni-dir\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.009379 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-system-cni-dir\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.009380 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-multus-cni-dir\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.009553 5116 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-log-socket\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.009616 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvmnt\" (UniqueName: \"kubernetes.io/projected/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-kube-api-access-cvmnt\") pod \"ovnkube-control-plane-57b78d8988-w69sm\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.009676 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/280f2c67-05f3-4f21-bd2d-6a22add2b93e-os-release\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.009728 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/280f2c67-05f3-4f21-bd2d-6a22add2b93e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.009780 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-var-lib-kubelet\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.009827 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c1e98f6b-f90a-4408-b303-926b753052ff-serviceca\") pod \"node-ca-26d6n\" (UID: \"c1e98f6b-f90a-4408-b303-926b753052ff\") " pod="openshift-image-registry/node-ca-26d6n" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.009884 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-systemd-units\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.009952 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dfx5p\" (UniqueName: \"kubernetes.io/projected/c1e98f6b-f90a-4408-b303-926b753052ff-kube-api-access-dfx5p\") pod \"node-ca-26d6n\" (UID: \"c1e98f6b-f90a-4408-b303-926b753052ff\") " pod="openshift-image-registry/node-ca-26d6n" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.010043 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-ovn\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc 
kubenswrapper[5116]: I1209 14:15:52.010089 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-run-ovn-kubernetes\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.010188 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/140ab739-f0e3-4429-8e23-03782755777d-rootfs\") pod \"machine-config-daemon-phdhk\" (UID: \"140ab739-f0e3-4429-8e23-03782755777d\") " pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.010279 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jtm56\" (UniqueName: \"kubernetes.io/projected/280f2c67-05f3-4f21-bd2d-6a22add2b93e-kube-api-access-jtm56\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.010340 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-run-netns\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.010359 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2a441b53-f957-4f01-a123-a96c637c3fe2-cni-binary-copy\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.010386 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-var-lib-cni-multus\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.010519 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-hostroot\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.010528 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-multus-cni-dir\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.010571 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-run-multus-certs\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.010627 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-run-multus-certs\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.010645 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1e98f6b-f90a-4408-b303-926b753052ff-host\") pod \"node-ca-26d6n\" (UID: \"c1e98f6b-f90a-4408-b303-926b753052ff\") " pod="openshift-image-registry/node-ca-26d6n" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.010738 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-run-netns\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.010749 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-var-lib-cni-multus\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.010881 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1e98f6b-f90a-4408-b303-926b753052ff-host\") pod \"node-ca-26d6n\" (UID: \"c1e98f6b-f90a-4408-b303-926b753052ff\") " pod="openshift-image-registry/node-ca-26d6n" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.010946 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs\") pod \"network-metrics-daemon-pmt9f\" (UID: \"51843597-ba2b-4059-aa79-13887c6100f2\") " pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011049 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-cni-bin\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011097 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0df855a1-8389-4874-a68c-de5f76fe650a-ovn-node-metrics-cert\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011160 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/280f2c67-05f3-4f21-bd2d-6a22add2b93e-system-cni-dir\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011268 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-os-release\") pod 
\"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011323 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2a441b53-f957-4f01-a123-a96c637c3fe2-multus-daemon-config\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011330 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-hostroot\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011377 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011423 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/140ab739-f0e3-4429-8e23-03782755777d-rootfs\") pod \"machine-config-daemon-phdhk\" (UID: \"140ab739-f0e3-4429-8e23-03782755777d\") " pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011436 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-w69sm\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011492 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7mnb\" (UniqueName: \"kubernetes.io/projected/2eddb9fd-1d3a-4992-b326-9271ffb360e7-kube-api-access-n7mnb\") pod \"node-resolver-2888f\" (UID: \"2eddb9fd-1d3a-4992-b326-9271ffb360e7\") " pod="openshift-dns/node-resolver-2888f" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011547 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-run-k8s-cni-cncf-io\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011651 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt5f7\" (UniqueName: \"kubernetes.io/projected/0df855a1-8389-4874-a68c-de5f76fe650a-kube-api-access-xt5f7\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011661 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/280f2c67-05f3-4f21-bd2d-6a22add2b93e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011691 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-w69sm\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011740 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/280f2c67-05f3-4f21-bd2d-6a22add2b93e-cnibin\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011775 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-etc-kubernetes\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011808 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-ovnkube-script-lib\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011877 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/280f2c67-05f3-4f21-bd2d-6a22add2b93e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011900 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-var-lib-kubelet\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011917 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-multus-socket-dir-parent\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.012066 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-multus-socket-dir-parent\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.012109 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-host-run-k8s-cni-cncf-io\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.012148 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/280f2c67-05f3-4f21-bd2d-6a22add2b93e-system-cni-dir\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.012248 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-os-release\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.013307 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.011802 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/280f2c67-05f3-4f21-bd2d-6a22add2b93e-os-release\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.013436 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.013496 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-systemd\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.013545 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-etc-openvswitch\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.013570 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-node-log\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.013749 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"cnibin\" (UniqueName: \"kubernetes.io/host-path/280f2c67-05f3-4f21-bd2d-6a22add2b93e-cnibin\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.013795 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.013828 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a441b53-f957-4f01-a123-a96c637c3fe2-etc-kubernetes\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014029 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014048 5116 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014062 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014077 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014092 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014107 5116 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014121 5116 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014134 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014148 5116 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014159 5116 reconciler_common.go:299] "Volume 
detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014174 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014187 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014200 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014213 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014225 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014239 5116 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014252 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014265 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014278 5116 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014289 5116 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014301 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014313 5116 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014327 
5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014339 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014353 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014366 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014378 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014390 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014401 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014412 5116 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014424 5116 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014435 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014449 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014460 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014472 5116 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014485 5116 reconciler_common.go:299] "Volume 
detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014496 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014509 5116 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014520 5116 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014531 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014541 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014554 5116 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014567 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014579 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014591 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014604 5116 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014617 5116 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014629 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014642 5116 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014656 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014669 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014672 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/280f2c67-05f3-4f21-bd2d-6a22add2b93e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014682 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014787 5116 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014834 5116 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014860 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014887 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014916 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.014894 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015024 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015060 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015088 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015105 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015121 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015216 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015237 5116 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015220 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015253 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015322 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015347 5116 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015426 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015446 5116 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015467 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015486 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015506 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015526 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015545 5116 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015563 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015583 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015603 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015623 5116 
reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015641 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015663 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015682 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015702 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015721 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015741 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015760 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015779 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015798 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015818 5116 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015838 5116 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015882 5116 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" 
Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015902 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015925 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015943 5116 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.015988 5116 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016008 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016025 5116 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016044 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016064 5116 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016084 5116 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016103 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016122 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016140 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016159 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: 
I1209 14:15:52.016180 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016211 5116 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016238 5116 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016265 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016291 5116 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016317 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016340 5116 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016366 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016455 5116 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016484 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016509 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016533 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016558 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 
14:15:52.016584 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016607 5116 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016632 5116 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016655 5116 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016685 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016711 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016737 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016758 5116 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016780 5116 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016802 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016825 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016847 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016870 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 09 
14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016897 5116 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016921 5116 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.016943 5116 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017058 5116 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017087 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017111 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017136 5116 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017160 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017183 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017203 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017222 5116 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017243 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017264 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017285 5116 
reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017304 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017324 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017344 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017363 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017382 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017401 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017423 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017442 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017461 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017480 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017499 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017519 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017537 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017556 5116 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017576 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017594 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017613 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017633 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017652 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017671 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017688 5116 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017707 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017726 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017745 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017765 5116 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017783 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017801 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017820 5116 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017841 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017861 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017881 5116 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017900 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017919 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.017943 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.018015 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.018036 5116 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.018055 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.018075 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.019331 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/c1e98f6b-f90a-4408-b303-926b753052ff-serviceca\") pod \"node-ca-26d6n\" (UID: \"c1e98f6b-f90a-4408-b303-926b753052ff\") " pod="openshift-image-registry/node-ca-26d6n" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.021592 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/140ab739-f0e3-4429-8e23-03782755777d-proxy-tls\") pod \"machine-config-daemon-phdhk\" (UID: \"140ab739-f0e3-4429-8e23-03782755777d\") " pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.024027 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.024104 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.024172 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.024128 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath
\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.025051 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2a441b53-f957-4f01-a123-a96c637c3fe2-multus-daemon-config\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.025391 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.025513 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.026194 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.026484 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.026664 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.026816 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.027033 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.027183 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.027188 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.027689 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.027870 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.027903 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.027933 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.027921 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.028198 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.028862 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.028903 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.028923 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.028949 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.029016 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:52Z","lastTransitionTime":"2025-12-09T14:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.029052 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.029424 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbg2k\" (UniqueName: \"kubernetes.io/projected/2a441b53-f957-4f01-a123-a96c637c3fe2-kube-api-access-jbg2k\") pod \"multus-554lf\" (UID: \"2a441b53-f957-4f01-a123-a96c637c3fe2\") " pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.029628 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.032219 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.032647 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.032861 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfx5p\" (UniqueName: \"kubernetes.io/projected/c1e98f6b-f90a-4408-b303-926b753052ff-kube-api-access-dfx5p\") pod \"node-ca-26d6n\" (UID: \"c1e98f6b-f90a-4408-b303-926b753052ff\") " pod="openshift-image-registry/node-ca-26d6n" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.034047 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtm56\" (UniqueName: \"kubernetes.io/projected/280f2c67-05f3-4f21-bd2d-6a22add2b93e-kube-api-access-jtm56\") pod \"multus-additional-cni-plugins-65brv\" (UID: \"280f2c67-05f3-4f21-bd2d-6a22add2b93e\") " pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.034674 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb62t\" (UniqueName: \"kubernetes.io/projected/140ab739-f0e3-4429-8e23-03782755777d-kube-api-access-cb62t\") pod \"machine-config-daemon-phdhk\" (UID: \"140ab739-f0e3-4429-8e23-03782755777d\") " pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.037546 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.037416 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.038529 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.039380 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.052363 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.062646 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.066269 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.068383 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.072154 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.073255 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.079745 5116 status_manager.go:919] "Failed to update status for 
pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.084200 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:52 crc kubenswrapper[5116]: W1209 14:15:52.086837 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34177974_8d82_49d2_a763_391d0df3bbd8.slice/crio-8845d7ef86730bc1c23ebf724e6747028602de25afc6f23ca86437d95d15f8e7 WatchSource:0}: Error finding container 8845d7ef86730bc1c23ebf724e6747028602de25afc6f23ca86437d95d15f8e7: Status 404 returned error can't find the container with id 8845d7ef86730bc1c23ebf724e6747028602de25afc6f23ca86437d95d15f8e7 Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.089551 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:15:52 crc kubenswrapper[5116]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 09 14:15:52 crc kubenswrapper[5116]: set -o allexport Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: source /etc/kubernetes/apiserver-url.env Dec 09 14:15:52 crc kubenswrapper[5116]: else Dec 09 14:15:52 crc kubenswrapper[5116]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 09 14:15:52 crc kubenswrapper[5116]: exit 1 Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 09 14:15:52 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,V
alue:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:15:52 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.089559 5116 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.090366 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.090627 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.099897 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],
\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.105793 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.106156 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:15:52 crc kubenswrapper[5116]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: set -o allexport Dec 09 14:15:52 crc kubenswrapper[5116]: source "/env/_master" Dec 09 14:15:52 crc kubenswrapper[5116]: set +o allexport Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 09 14:15:52 crc kubenswrapper[5116]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 09 14:15:52 crc kubenswrapper[5116]: ho_enable="--enable-hybrid-overlay" Dec 09 14:15:52 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 09 14:15:52 crc kubenswrapper[5116]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 09 14:15:52 crc kubenswrapper[5116]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 09 14:15:52 crc kubenswrapper[5116]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 09 14:15:52 crc kubenswrapper[5116]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 09 14:15:52 crc kubenswrapper[5116]: --webhook-host=127.0.0.1 \ Dec 09 14:15:52 crc kubenswrapper[5116]: --webhook-port=9743 \ Dec 09 14:15:52 crc kubenswrapper[5116]: ${ho_enable} \ Dec 09 14:15:52 crc kubenswrapper[5116]: --enable-interconnect \ Dec 09 14:15:52 crc kubenswrapper[5116]: --disable-approver \ Dec 09 14:15:52 crc kubenswrapper[5116]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 09 14:15:52 crc kubenswrapper[5116]: --wait-for-kubernetes-api=200s \ Dec 09 14:15:52 crc kubenswrapper[5116]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 09 14:15:52 crc kubenswrapper[5116]: --loglevel="${LOGLEVEL}" Dec 09 14:15:52 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:15:52 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.108642 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:15:52 crc kubenswrapper[5116]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: set -o allexport Dec 09 14:15:52 crc kubenswrapper[5116]: source "/env/_master" Dec 09 14:15:52 crc kubenswrapper[5116]: set +o allexport Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 09 14:15:52 crc kubenswrapper[5116]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 09 14:15:52 crc kubenswrapper[5116]: --disable-webhook \ Dec 09 14:15:52 crc kubenswrapper[5116]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 09 14:15:52 crc kubenswrapper[5116]: --loglevel="${LOGLEVEL}" Dec 09 14:15:52 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:15:52 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.109818 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.111405 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.119113 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs\") pod \"network-metrics-daemon-pmt9f\" (UID: \"51843597-ba2b-4059-aa79-13887c6100f2\") " pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.120135 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120194 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-cni-bin\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.120223 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs podName:51843597-ba2b-4059-aa79-13887c6100f2 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:52.620203837 +0000 UTC m=+91.141948635 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs") pod "network-metrics-daemon-pmt9f" (UID: "51843597-ba2b-4059-aa79-13887c6100f2") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120236 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-cni-bin\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120261 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0df855a1-8389-4874-a68c-de5f76fe650a-ovn-node-metrics-cert\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120294 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120315 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-w69sm\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120332 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n7mnb\" (UniqueName: \"kubernetes.io/projected/2eddb9fd-1d3a-4992-b326-9271ffb360e7-kube-api-access-n7mnb\") pod \"node-resolver-2888f\" (UID: \"2eddb9fd-1d3a-4992-b326-9271ffb360e7\") " pod="openshift-dns/node-resolver-2888f" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120347 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xt5f7\" (UniqueName: \"kubernetes.io/projected/0df855a1-8389-4874-a68c-de5f76fe650a-kube-api-access-xt5f7\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120364 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-w69sm\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120388 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-ovnkube-script-lib\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" 
Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120435 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-systemd\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120458 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-etc-openvswitch\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120474 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-node-log\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120492 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wq82j\" (UniqueName: \"kubernetes.io/projected/51843597-ba2b-4059-aa79-13887c6100f2-kube-api-access-wq82j\") pod \"network-metrics-daemon-pmt9f\" (UID: \"51843597-ba2b-4059-aa79-13887c6100f2\") " pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120509 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2eddb9fd-1d3a-4992-b326-9271ffb360e7-tmp-dir\") pod \"node-resolver-2888f\" (UID: \"2eddb9fd-1d3a-4992-b326-9271ffb360e7\") " pod="openshift-dns/node-resolver-2888f" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120583 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-slash\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120600 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-run-netns\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120616 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-cni-netd\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120638 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-var-lib-openvswitch\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120654 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-env-overrides\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120670 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2eddb9fd-1d3a-4992-b326-9271ffb360e7-hosts-file\") pod \"node-resolver-2888f\" (UID: \"2eddb9fd-1d3a-4992-b326-9271ffb360e7\") " pod="openshift-dns/node-resolver-2888f" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120698 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-w69sm\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120714 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-node-log\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120724 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-kubelet\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120756 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-kubelet\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120776 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-openvswitch\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120801 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-ovnkube-config\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120835 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-log-socket\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120853 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cvmnt\" (UniqueName: 
\"kubernetes.io/projected/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-kube-api-access-cvmnt\") pod \"ovnkube-control-plane-57b78d8988-w69sm\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120876 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-systemd-units\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120894 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-ovn\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.120908 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-run-ovn-kubernetes\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.121081 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-var-lib-openvswitch\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.121412 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2eddb9fd-1d3a-4992-b326-9271ffb360e7-tmp-dir\") pod \"node-resolver-2888f\" (UID: \"2eddb9fd-1d3a-4992-b326-9271ffb360e7\") " pod="openshift-dns/node-resolver-2888f" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.121761 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-log-socket\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.121816 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-openvswitch\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122051 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-run-ovn-kubernetes\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122048 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-ovn\") pod \"ovnkube-node-tg8rn\" 
(UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122085 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122089 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-systemd-units\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122183 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122207 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122220 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122235 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122250 5116 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122262 5116 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122274 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122287 5116 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122299 5116 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122312 5116 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" 
Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122323 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122336 5116 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122349 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122361 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122373 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122385 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122365 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122398 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122489 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122510 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122522 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122536 5116 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122549 5116 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122561 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122572 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122586 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122598 5116 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node 
\"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122611 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122626 5116 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122639 5116 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122651 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122663 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122665 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-slash\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122675 5116 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122688 5116 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122701 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122702 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-run-netns\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122714 5116 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122725 5116 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 09 
14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122733 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-cni-netd\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122736 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122759 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122772 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122776 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-w69sm\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122784 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122822 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122489 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-env-overrides\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.122868 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-systemd\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.123058 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-etc-openvswitch\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.123108 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2eddb9fd-1d3a-4992-b326-9271ffb360e7-hosts-file\") pod \"node-resolver-2888f\" (UID: \"2eddb9fd-1d3a-4992-b326-9271ffb360e7\") " 
pod="openshift-dns/node-resolver-2888f" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.123641 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-w69sm\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.124021 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-26d6n" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.124654 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.124764 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-ovnkube-script-lib\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.125791 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" 
pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.126006 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-ovnkube-config\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.128051 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0df855a1-8389-4874-a68c-de5f76fe650a-ovn-node-metrics-cert\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.130669 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.130869 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.130979 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.131070 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.131149 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:52Z","lastTransitionTime":"2025-12-09T14:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.131583 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-w69sm\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.135901 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.136680 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq82j\" (UniqueName: \"kubernetes.io/projected/51843597-ba2b-4059-aa79-13887c6100f2-kube-api-access-wq82j\") pod \"network-metrics-daemon-pmt9f\" (UID: \"51843597-ba2b-4059-aa79-13887c6100f2\") " pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:15:52 crc kubenswrapper[5116]: 
I1209 14:15:52.136879 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-65brv" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.139478 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7mnb\" (UniqueName: \"kubernetes.io/projected/2eddb9fd-1d3a-4992-b326-9271ffb360e7-kube-api-access-n7mnb\") pod \"node-resolver-2888f\" (UID: \"2eddb9fd-1d3a-4992-b326-9271ffb360e7\") " pod="openshift-dns/node-resolver-2888f" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.140878 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"12f0ad2813ef0adcd78f15d6c34e4db897a66414abbb62f47ddedb664b8197e4"} Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.141784 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"89442cab0c53df8216563f0712de700dc46342abeda595f81a40e41c8e48c865"} Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.142515 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvmnt\" (UniqueName: \"kubernetes.io/projected/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-kube-api-access-cvmnt\") pod \"ovnkube-control-plane-57b78d8988-w69sm\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.142903 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"8845d7ef86730bc1c23ebf724e6747028602de25afc6f23ca86437d95d15f8e7"} Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.143599 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.143664 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:15:52 crc kubenswrapper[5116]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: set -o allexport Dec 09 14:15:52 crc kubenswrapper[5116]: source "/env/_master" Dec 09 14:15:52 crc kubenswrapper[5116]: set +o allexport Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 09 14:15:52 crc kubenswrapper[5116]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 09 14:15:52 crc kubenswrapper[5116]: ho_enable="--enable-hybrid-overlay" Dec 09 14:15:52 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 09 14:15:52 crc kubenswrapper[5116]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 09 14:15:52 crc kubenswrapper[5116]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 09 14:15:52 crc kubenswrapper[5116]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 09 14:15:52 crc kubenswrapper[5116]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 09 14:15:52 crc kubenswrapper[5116]: --webhook-host=127.0.0.1 \ Dec 09 14:15:52 crc kubenswrapper[5116]: --webhook-port=9743 \ Dec 09 14:15:52 crc kubenswrapper[5116]: ${ho_enable} \ Dec 09 14:15:52 crc kubenswrapper[5116]: --enable-interconnect \ Dec 09 14:15:52 crc kubenswrapper[5116]: --disable-approver \ Dec 09 14:15:52 crc kubenswrapper[5116]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 09 14:15:52 crc kubenswrapper[5116]: --wait-for-kubernetes-api=200s \ Dec 09 14:15:52 crc kubenswrapper[5116]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 09 14:15:52 crc kubenswrapper[5116]: --loglevel="${LOGLEVEL}" Dec 09 14:15:52 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Dec 09 14:15:52 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.144818 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:15:52 crc kubenswrapper[5116]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 09 14:15:52 crc kubenswrapper[5116]: set -o allexport Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: source /etc/kubernetes/apiserver-url.env Dec 09 14:15:52 crc kubenswrapper[5116]: else Dec 09 14:15:52 crc kubenswrapper[5116]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 09 14:15:52 crc kubenswrapper[5116]: exit 1 Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 09 14:15:52 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:15:52 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.144924 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.145579 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:15:52 crc kubenswrapper[5116]: container 
&Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: set -o allexport Dec 09 14:15:52 crc kubenswrapper[5116]: source "/env/_master" Dec 09 14:15:52 crc kubenswrapper[5116]: set +o allexport Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 09 14:15:52 crc kubenswrapper[5116]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 09 14:15:52 crc kubenswrapper[5116]: --disable-webhook \ Dec 09 14:15:52 crc kubenswrapper[5116]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 09 14:15:52 crc kubenswrapper[5116]: --loglevel="${LOGLEVEL}" Dec 09 14:15:52 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:15:52 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.145890 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.146002 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.147551 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 09 14:15:52 crc kubenswrapper[5116]: W1209 14:15:52.147624 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod280f2c67_05f3_4f21_bd2d_6a22add2b93e.slice/crio-f78471bff42540af5a3108c3e614ecda5005b8f2c1c7169eb83a7fce7b8e9213 WatchSource:0}: Error finding container f78471bff42540af5a3108c3e614ecda5005b8f2c1c7169eb83a7fce7b8e9213: Status 404 returned error can't find the container with id f78471bff42540af5a3108c3e614ecda5005b8f2c1c7169eb83a7fce7b8e9213 Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.147876 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt5f7\" (UniqueName: \"kubernetes.io/projected/0df855a1-8389-4874-a68c-de5f76fe650a-kube-api-access-xt5f7\") pod \"ovnkube-node-tg8rn\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.148002 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:15:52 crc kubenswrapper[5116]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 09 14:15:52 crc kubenswrapper[5116]: while [ true ]; Dec 09 14:15:52 crc kubenswrapper[5116]: do Dec 09 14:15:52 crc kubenswrapper[5116]: for f in $(ls /tmp/serviceca); do Dec 09 14:15:52 crc kubenswrapper[5116]: echo $f Dec 09 14:15:52 crc kubenswrapper[5116]: ca_file_path="/tmp/serviceca/${f}" Dec 09 14:15:52 crc kubenswrapper[5116]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 09 14:15:52 crc kubenswrapper[5116]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 09 14:15:52 crc kubenswrapper[5116]: if [ -e "${reg_dir_path}" ]; then Dec 09 14:15:52 crc kubenswrapper[5116]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 09 14:15:52 crc kubenswrapper[5116]: else Dec 09 14:15:52 crc kubenswrapper[5116]: mkdir $reg_dir_path Dec 09 14:15:52 crc kubenswrapper[5116]: cp $ca_file_path $reg_dir_path/ca.crt Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: done Dec 09 14:15:52 crc kubenswrapper[5116]: for d in $(ls /etc/docker/certs.d); do Dec 09 14:15:52 crc kubenswrapper[5116]: echo $d Dec 09 14:15:52 crc kubenswrapper[5116]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 09 14:15:52 crc kubenswrapper[5116]: reg_conf_path="/tmp/serviceca/${dp}" Dec 09 14:15:52 crc kubenswrapper[5116]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 09 14:15:52 crc kubenswrapper[5116]: rm -rf /etc/docker/certs.d/$d Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: done Dec 09 14:15:52 crc kubenswrapper[5116]: sleep 60 & wait ${!} Dec 09 14:15:52 crc kubenswrapper[5116]: done Dec 09 14:15:52 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfx5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-26d6n_openshift-image-registry(c1e98f6b-f90a-4408-b303-926b753052ff): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:15:52 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.149143 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-26d6n" podUID="c1e98f6b-f90a-4408-b303-926b753052ff" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.149652 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtm56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-65brv_openshift-multus(280f2c67-05f3-4f21-bd2d-6a22add2b93e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.150855 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-65brv" podUID="280f2c67-05f3-4f21-bd2d-6a22add2b93e" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.154944 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-554lf" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.160637 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca
55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\
\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.165944 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.174388 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.180033 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:15:52 crc kubenswrapper[5116]: container 
&Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 09 14:15:52 crc kubenswrapper[5116]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 09 14:15:52 crc kubenswrapper[5116]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/
cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbg2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-554lf_openshift-multus(2a441b53-f957-4f01-a123-a96c637c3fe2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:15:52 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.181286 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-554lf" podUID="2a441b53-f957-4f01-a123-a96c637c3fe2" Dec 09 14:15:52 crc kubenswrapper[5116]: W1209 14:15:52.183754 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod140ab739_f0e3_4429_8e23_03782755777d.slice/crio-9dc608db4014cc2ad5344a076319a225aecfd876f269a414ee00b8a5c8685fec WatchSource:0}: Error finding container 9dc608db4014cc2ad5344a076319a225aecfd876f269a414ee00b8a5c8685fec: Status 404 returned error can't find the container with id 9dc608db4014cc2ad5344a076319a225aecfd876f269a414ee00b8a5c8685fec Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.186412 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cb62t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-phdhk_openshift-machine-config-operator(140ab739-f0e3-4429-8e23-03782755777d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.188377 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2888f" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.189389 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cb62t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-phdhk_openshift-machine-config-operator(140ab739-f0e3-4429-8e23-03782755777d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 
14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.190666 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.194924 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.201012 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:15:52 crc kubenswrapper[5116]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 09 14:15:52 crc kubenswrapper[5116]: set -uo pipefail Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 09 14:15:52 crc kubenswrapper[5116]: HOSTS_FILE="/etc/hosts" Dec 09 14:15:52 crc kubenswrapper[5116]: TEMP_FILE="/tmp/hosts.tmp" Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: # Make a temporary file with the old hosts file's attributes. Dec 09 14:15:52 crc kubenswrapper[5116]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 09 14:15:52 crc kubenswrapper[5116]: echo "Failed to preserve hosts file. Exiting." Dec 09 14:15:52 crc kubenswrapper[5116]: exit 1 Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: while true; do Dec 09 14:15:52 crc kubenswrapper[5116]: declare -A svc_ips Dec 09 14:15:52 crc kubenswrapper[5116]: for svc in "${services[@]}"; do Dec 09 14:15:52 crc kubenswrapper[5116]: # Fetch service IP from cluster dns if present. We make several tries Dec 09 14:15:52 crc kubenswrapper[5116]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 09 14:15:52 crc kubenswrapper[5116]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 09 14:15:52 crc kubenswrapper[5116]: # support UDP loadbalancers and require reaching DNS through TCP. 
Dec 09 14:15:52 crc kubenswrapper[5116]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 09 14:15:52 crc kubenswrapper[5116]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 09 14:15:52 crc kubenswrapper[5116]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 09 14:15:52 crc kubenswrapper[5116]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 09 14:15:52 crc kubenswrapper[5116]: for i in ${!cmds[*]} Dec 09 14:15:52 crc kubenswrapper[5116]: do Dec 09 14:15:52 crc kubenswrapper[5116]: ips=($(eval "${cmds[i]}")) Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: svc_ips["${svc}"]="${ips[@]}" Dec 09 14:15:52 crc kubenswrapper[5116]: break Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: done Dec 09 14:15:52 crc kubenswrapper[5116]: done Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: # Update /etc/hosts only if we get valid service IPs Dec 09 14:15:52 crc kubenswrapper[5116]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 09 14:15:52 crc kubenswrapper[5116]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 09 14:15:52 crc kubenswrapper[5116]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 09 14:15:52 crc kubenswrapper[5116]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 09 14:15:52 crc kubenswrapper[5116]: sleep 60 & wait Dec 09 14:15:52 crc kubenswrapper[5116]: continue Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: # Append resolver entries for services Dec 09 14:15:52 crc kubenswrapper[5116]: rc=0 Dec 09 14:15:52 crc kubenswrapper[5116]: for svc in "${!svc_ips[@]}"; do Dec 09 14:15:52 crc kubenswrapper[5116]: for ip in ${svc_ips[${svc}]}; do Dec 09 14:15:52 crc kubenswrapper[5116]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 09 14:15:52 crc kubenswrapper[5116]: done Dec 09 14:15:52 crc kubenswrapper[5116]: done Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ $rc -ne 0 ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: sleep 60 & wait Dec 09 14:15:52 crc kubenswrapper[5116]: continue Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 09 14:15:52 crc kubenswrapper[5116]: # Replace /etc/hosts with our modified version if needed Dec 09 14:15:52 crc kubenswrapper[5116]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 09 14:15:52 crc kubenswrapper[5116]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: sleep 60 & wait Dec 09 14:15:52 crc kubenswrapper[5116]: unset svc_ips Dec 09 14:15:52 crc kubenswrapper[5116]: done Dec 09 14:15:52 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7mnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-2888f_openshift-dns(2eddb9fd-1d3a-4992-b326-9271ffb360e7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:15:52 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.202137 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-2888f" podUID="2eddb9fd-1d3a-4992-b326-9271ffb360e7" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.203213 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.211079 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.213361 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.220996 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.221050 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:15:52 crc kubenswrapper[5116]: W1209 14:15:52.223730 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0df855a1_8389_4874_a68c_de5f76fe650a.slice/crio-a1ddadc499fe145d5e1ed1493e56821ebac67c8049dbd9c5aa533d6d5abb8eb0 WatchSource:0}: Error finding container a1ddadc499fe145d5e1ed1493e56821ebac67c8049dbd9c5aa533d6d5abb8eb0: Status 404 returned error can't find the container with id a1ddadc499fe145d5e1ed1493e56821ebac67c8049dbd9c5aa533d6d5abb8eb0 Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.226756 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:15:52 crc kubenswrapper[5116]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 09 14:15:52 crc kubenswrapper[5116]: apiVersion: v1 Dec 09 14:15:52 crc kubenswrapper[5116]: clusters: Dec 09 14:15:52 crc kubenswrapper[5116]: - cluster: Dec 09 14:15:52 crc kubenswrapper[5116]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 09 14:15:52 crc kubenswrapper[5116]: server: https://api-int.crc.testing:6443 Dec 09 14:15:52 crc kubenswrapper[5116]: name: default-cluster Dec 09 14:15:52 crc kubenswrapper[5116]: contexts: Dec 09 14:15:52 crc kubenswrapper[5116]: - context: Dec 09 14:15:52 crc kubenswrapper[5116]: cluster: default-cluster Dec 09 14:15:52 crc kubenswrapper[5116]: namespace: default Dec 09 14:15:52 crc kubenswrapper[5116]: user: default-auth Dec 09 14:15:52 crc kubenswrapper[5116]: name: default-context Dec 09 14:15:52 crc kubenswrapper[5116]: current-context: default-context Dec 09 14:15:52 crc kubenswrapper[5116]: kind: Config Dec 09 14:15:52 crc kubenswrapper[5116]: preferences: {} Dec 09 14:15:52 crc kubenswrapper[5116]: users: Dec 09 14:15:52 crc kubenswrapper[5116]: - name: default-auth Dec 09 14:15:52 crc kubenswrapper[5116]: user: Dec 09 14:15:52 crc kubenswrapper[5116]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 09 14:15:52 crc kubenswrapper[5116]: client-key: 
/etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 09 14:15:52 crc kubenswrapper[5116]: EOF Dec 09 14:15:52 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xt5f7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-tg8rn_openshift-ovn-kubernetes(0df855a1-8389-4874-a68c-de5f76fe650a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:15:52 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.227937 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.229917 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.233090 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.233130 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.233146 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.233163 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.233176 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:52Z","lastTransitionTime":"2025-12-09T14:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:52 crc kubenswrapper[5116]: W1209 14:15:52.235198 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59bb7bf4_a83e_4d96_87b6_b2e4235e1620.slice/crio-0460bd7069b7358e184303b4d03cc73f2b74d738be65e783f5de66863b71bdcc WatchSource:0}: Error finding container 0460bd7069b7358e184303b4d03cc73f2b74d738be65e783f5de66863b71bdcc: Status 404 returned error can't find the container with id 0460bd7069b7358e184303b4d03cc73f2b74d738be65e783f5de66863b71bdcc Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.236787 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:15:52 crc kubenswrapper[5116]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 09 14:15:52 crc kubenswrapper[5116]: set -euo pipefail Dec 09 14:15:52 crc kubenswrapper[5116]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 09 14:15:52 crc kubenswrapper[5116]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 09 14:15:52 crc kubenswrapper[5116]: # As the secret mount is optional we must wait for the files to be present. Dec 09 14:15:52 crc kubenswrapper[5116]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 09 14:15:52 crc kubenswrapper[5116]: TS=$(date +%s) Dec 09 14:15:52 crc kubenswrapper[5116]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 09 14:15:52 crc kubenswrapper[5116]: HAS_LOGGED_INFO=0 Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: log_missing_certs(){ Dec 09 14:15:52 crc kubenswrapper[5116]: CUR_TS=$(date +%s) Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 09 14:15:52 crc kubenswrapper[5116]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 09 14:15:52 crc kubenswrapper[5116]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 09 14:15:52 crc kubenswrapper[5116]: HAS_LOGGED_INFO=1 Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: } Dec 09 14:15:52 crc kubenswrapper[5116]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 09 14:15:52 crc kubenswrapper[5116]: log_missing_certs Dec 09 14:15:52 crc kubenswrapper[5116]: sleep 5 Dec 09 14:15:52 crc kubenswrapper[5116]: done Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 09 14:15:52 crc kubenswrapper[5116]: exec /usr/bin/kube-rbac-proxy \ Dec 09 14:15:52 crc kubenswrapper[5116]: --logtostderr \ Dec 09 14:15:52 crc kubenswrapper[5116]: --secure-listen-address=:9108 \ Dec 09 14:15:52 crc kubenswrapper[5116]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 09 14:15:52 crc kubenswrapper[5116]: --upstream=http://127.0.0.1:29108/ \ Dec 09 14:15:52 crc kubenswrapper[5116]: --tls-private-key-file=${TLS_PK} \ Dec 09 14:15:52 crc kubenswrapper[5116]: --tls-cert-file=${TLS_CERT} Dec 09 14:15:52 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cvmnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-w69sm_openshift-ovn-kubernetes(59bb7bf4-a83e-4d96-87b6-b2e4235e1620): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:15:52 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.237975 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.239487 5116 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:15:52 crc kubenswrapper[5116]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ -f "/env/_master" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: set -o allexport Dec 09 14:15:52 crc kubenswrapper[5116]: source "/env/_master" Dec 09 14:15:52 crc kubenswrapper[5116]: set +o allexport Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: ovn_v4_join_subnet_opt= Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: ovn_v6_join_subnet_opt= Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: ovn_v4_transit_switch_subnet_opt= Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: ovn_v6_transit_switch_subnet_opt= Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ "" != "" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: dns_name_resolver_enabled_flag= Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 09 
14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: # This is needed so that converting clusters from GA to TP Dec 09 14:15:52 crc kubenswrapper[5116]: # will rollout control plane pods as well Dec 09 14:15:52 crc kubenswrapper[5116]: network_segmentation_enabled_flag= Dec 09 14:15:52 crc kubenswrapper[5116]: multi_network_enabled_flag= Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ "true" == "true" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: multi_network_enabled_flag="--enable-multi-network" Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ "true" == "true" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ "true" != "true" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: multi_network_enabled_flag="--enable-multi-network" Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: route_advertisements_enable_flag= Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: preconfigured_udn_addresses_enable_flag= Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: # Enable multi-network policy if configured (control-plane always full mode) Dec 09 14:15:52 crc kubenswrapper[5116]: multi_network_policy_enabled_flag= Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ "false" == "true" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: # Enable admin network policy if configured (control-plane always full mode) Dec 09 14:15:52 crc kubenswrapper[5116]: admin_network_policy_enabled_flag= Dec 09 14:15:52 crc kubenswrapper[5116]: if [[ "true" == "true" ]]; then Dec 09 14:15:52 crc kubenswrapper[5116]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: if [ "shared" == "shared" ]; then Dec 09 14:15:52 crc kubenswrapper[5116]: gateway_mode_flags="--gateway-mode shared" Dec 09 14:15:52 crc kubenswrapper[5116]: elif [ "shared" == "local" ]; then Dec 09 14:15:52 crc kubenswrapper[5116]: gateway_mode_flags="--gateway-mode local" Dec 09 14:15:52 crc kubenswrapper[5116]: else Dec 09 14:15:52 crc kubenswrapper[5116]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Dec 09 14:15:52 crc kubenswrapper[5116]: exit 1 Dec 09 14:15:52 crc kubenswrapper[5116]: fi Dec 09 14:15:52 crc kubenswrapper[5116]: Dec 09 14:15:52 crc kubenswrapper[5116]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 09 14:15:52 crc kubenswrapper[5116]: exec /usr/bin/ovnkube \ Dec 09 14:15:52 crc kubenswrapper[5116]: --enable-interconnect \ Dec 09 14:15:52 crc kubenswrapper[5116]: --init-cluster-manager "${K8S_NODE}" \ Dec 09 14:15:52 crc kubenswrapper[5116]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 09 14:15:52 crc kubenswrapper[5116]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 09 14:15:52 crc kubenswrapper[5116]: --metrics-bind-address "127.0.0.1:29108" \ Dec 09 14:15:52 crc kubenswrapper[5116]: --metrics-enable-pprof \ Dec 09 14:15:52 crc kubenswrapper[5116]: --metrics-enable-config-duration \ Dec 09 14:15:52 crc kubenswrapper[5116]: ${ovn_v4_join_subnet_opt} \ Dec 09 14:15:52 crc kubenswrapper[5116]: ${ovn_v6_join_subnet_opt} \ Dec 09 14:15:52 crc kubenswrapper[5116]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 09 14:15:52 crc kubenswrapper[5116]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 09 14:15:52 crc kubenswrapper[5116]: ${dns_name_resolver_enabled_flag} \ Dec 09 14:15:52 crc kubenswrapper[5116]: ${persistent_ips_enabled_flag} \ Dec 09 14:15:52 crc kubenswrapper[5116]: ${multi_network_enabled_flag} \ Dec 09 14:15:52 crc kubenswrapper[5116]: ${network_segmentation_enabled_flag} \ Dec 09 14:15:52 crc kubenswrapper[5116]: ${gateway_mode_flags} \ Dec 09 14:15:52 crc kubenswrapper[5116]: ${route_advertisements_enable_flag} \ Dec 09 14:15:52 crc kubenswrapper[5116]: ${preconfigured_udn_addresses_enable_flag} \ Dec 09 14:15:52 crc kubenswrapper[5116]: --enable-egress-ip=true \ Dec 09 14:15:52 crc kubenswrapper[5116]: --enable-egress-firewall=true \ Dec 09 14:15:52 crc kubenswrapper[5116]: --enable-egress-qos=true \ Dec 09 14:15:52 crc kubenswrapper[5116]: --enable-egress-service=true \ Dec 09 14:15:52 crc kubenswrapper[5116]: --enable-multicast \ Dec 09 14:15:52 crc kubenswrapper[5116]: --enable-multi-external-gateway=true \ Dec 09 14:15:52 crc kubenswrapper[5116]: ${multi_network_policy_enabled_flag} \ Dec 09 14:15:52 crc kubenswrapper[5116]: ${admin_network_policy_enabled_flag} Dec 09 14:15:52 crc kubenswrapper[5116]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cvmnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-w69sm_openshift-ovn-kubernetes(59bb7bf4-a83e-4d96-87b6-b2e4235e1620): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:15:52 crc kubenswrapper[5116]: > logger="UnhandledError" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.241051 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" podUID="59bb7bf4-a83e-4d96-87b6-b2e4235e1620" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.244903 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.258680 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.298360 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.335188 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.335717 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.335766 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.335781 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.335802 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.335817 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:52Z","lastTransitionTime":"2025-12-09T14:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.378009 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.418885 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.425368 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.425436 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.425518 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 
14:15:52.425563 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.425573 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.425594 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.425607 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.425665 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:53.425646895 +0000 UTC m=+91.947391693 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.425733 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.425742 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.425757 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.425777 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.425844 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:53.42581611 +0000 UTC m=+91.947560938 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.425786 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.425906 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:53.425877421 +0000 UTC m=+91.947622259 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.426034 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:53.426011405 +0000 UTC m=+91.947756233 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.438314 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.438350 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.438362 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.438380 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.438392 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:52Z","lastTransitionTime":"2025-12-09T14:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.460414 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.502173 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"us
er\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.527204 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.527416 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:15:53.527373042 +0000 UTC m=+92.049117890 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.535018 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.540621 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.540708 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.540733 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.540767 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.540790 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:52Z","lastTransitionTime":"2025-12-09T14:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.581355 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.619750 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.629069 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs\") pod \"network-metrics-daemon-pmt9f\" (UID: \"51843597-ba2b-4059-aa79-13887c6100f2\") " pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.629359 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:15:52 crc kubenswrapper[5116]: E1209 14:15:52.629476 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs podName:51843597-ba2b-4059-aa79-13887c6100f2 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:53.629451568 +0000 UTC m=+92.151196396 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs") pod "network-metrics-daemon-pmt9f" (UID: "51843597-ba2b-4059-aa79-13887c6100f2") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.645703 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.645787 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.645808 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.645832 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.645850 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:52Z","lastTransitionTime":"2025-12-09T14:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.668932 5116 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.749240 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.749322 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.749349 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.749381 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.749406 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:52Z","lastTransitionTime":"2025-12-09T14:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.851703 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.851769 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.851790 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.851820 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.851842 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:52Z","lastTransitionTime":"2025-12-09T14:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.957732 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.957789 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.957805 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.957826 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:52 crc kubenswrapper[5116]: I1209 14:15:52.957841 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:52Z","lastTransitionTime":"2025-12-09T14:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.059727 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.059772 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.059785 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.059803 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.059814 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:53Z","lastTransitionTime":"2025-12-09T14:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.147507 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2888f" event={"ID":"2eddb9fd-1d3a-4992-b326-9271ffb360e7","Type":"ContainerStarted","Data":"35ec83684551f8fbbd09575e941fe5b05f940c33adb63b8546c2c438952d81d6"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.149371 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" event={"ID":"140ab739-f0e3-4429-8e23-03782755777d","Type":"ContainerStarted","Data":"9dc608db4014cc2ad5344a076319a225aecfd876f269a414ee00b8a5c8685fec"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.151107 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-554lf" event={"ID":"2a441b53-f957-4f01-a123-a96c637c3fe2","Type":"ContainerStarted","Data":"0b52e9681240afa79ece01d008ab4354b532de33b2edd4867ebb066e6eed807d"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.153094 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" event={"ID":"280f2c67-05f3-4f21-bd2d-6a22add2b93e","Type":"ContainerStarted","Data":"f78471bff42540af5a3108c3e614ecda5005b8f2c1c7169eb83a7fce7b8e9213"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.156015 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" event={"ID":"59bb7bf4-a83e-4d96-87b6-b2e4235e1620","Type":"ContainerStarted","Data":"0460bd7069b7358e184303b4d03cc73f2b74d738be65e783f5de66863b71bdcc"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.158379 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerStarted","Data":"a1ddadc499fe145d5e1ed1493e56821ebac67c8049dbd9c5aa533d6d5abb8eb0"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.162648 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.162683 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.162697 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.162712 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.162723 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:53Z","lastTransitionTime":"2025-12-09T14:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.163515 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-26d6n" event={"ID":"c1e98f6b-f90a-4408-b303-926b753052ff","Type":"ContainerStarted","Data":"21aef015f8cb2405088e89451f5352f3e8836bca1fa7697dd691fe31c07a0e51"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.175398 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-
cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",
\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.196483 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef6
65a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a6
82480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347
f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.208649 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.220805 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.230408 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.245686 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.254693 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.263762 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.265319 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.265394 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.265420 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.265452 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.265477 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:53Z","lastTransitionTime":"2025-12-09T14:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.274981 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0
300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.288688 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.300649 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.312384 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.321672 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.333633 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.345866 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.356091 5116 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.367615 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.372465 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.372523 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.372536 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.372559 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.372572 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:53Z","lastTransitionTime":"2025-12-09T14:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.375511 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.403135 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b
21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.434713 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.439528 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.439590 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.439625 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.439646 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.439670 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.439739 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:55.439722689 +0000 UTC m=+93.961467487 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.439756 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.440103 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.440131 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.440142 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.440169 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. 
No retries permitted until 2025-12-09 14:15:55.440162501 +0000 UTC m=+93.961907299 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.440224 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.440249 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:55.440242963 +0000 UTC m=+93.961987761 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.439773 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.440756 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.440871 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:55.440827238 +0000 UTC m=+93.962572106 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.475492 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.475533 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.475541 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.475554 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.475563 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:53Z","lastTransitionTime":"2025-12-09T14:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.485886 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.518902 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocate
dResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.541240 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.541438 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:15:55.541423276 +0000 UTC m=+94.063168064 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.561932 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd82
78e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.578526 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.578580 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.578593 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.578609 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.578619 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:53Z","lastTransitionTime":"2025-12-09T14:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.595185 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.636523 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.641799 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs\") pod \"network-metrics-daemon-pmt9f\" (UID: \"51843597-ba2b-4059-aa79-13887c6100f2\") " pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.641941 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.642051 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs podName:51843597-ba2b-4059-aa79-13887c6100f2 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:55.642032333 +0000 UTC m=+94.163777131 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs") pod "network-metrics-daemon-pmt9f" (UID: "51843597-ba2b-4059-aa79-13887c6100f2") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.675166 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.680609 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.680642 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.680651 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.680666 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.680677 5116 
setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:53Z","lastTransitionTime":"2025-12-09T14:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.715087 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.748493 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.748536 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.748599 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.748492 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.748704 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.748731 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.748798 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmt9f" podUID="51843597-ba2b-4059-aa79-13887c6100f2" Dec 09 14:15:53 crc kubenswrapper[5116]: E1209 14:15:53.748877 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.752679 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.753797 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.754241 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.755462 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" 
path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.756663 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.759023 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.760709 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.762361 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.763923 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.768395 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.769667 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.771458 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.772537 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.773594 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.774725 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.775651 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.776366 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.777108 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" 
path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.778749 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.779711 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.781370 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.782809 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.782890 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.782907 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.782924 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.782936 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:53Z","lastTransitionTime":"2025-12-09T14:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.783014 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.785201 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.786029 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.787445 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.788239 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.790556 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.791763 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.793489 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.794165 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.796221 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.796868 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.798589 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.799530 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.802195 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.803846 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.804884 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.806423 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.808022 5116 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.808139 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.812940 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.815898 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.818093 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.819475 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.820180 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.821721 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.823186 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.824644 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.825702 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.827346 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.828454 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" 
path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.831755 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.832775 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.833553 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.834456 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.835546 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.837855 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.842564 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.844102 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.845425 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.859623 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.888210 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.888255 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.888266 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.888281 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.888293 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:53Z","lastTransitionTime":"2025-12-09T14:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.891646 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.916545 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.957398 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.990869 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.990905 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.990915 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.990927 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.990936 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:53Z","lastTransitionTime":"2025-12-09T14:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:53 crc kubenswrapper[5116]: I1209 14:15:53.996469 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not 
yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.039109 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.079064 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.092414 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.092470 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.092483 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.092501 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.092514 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:54Z","lastTransitionTime":"2025-12-09T14:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.115577 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8
b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.144867 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.144949 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.145001 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.145029 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.145052 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:54Z","lastTransitionTime":"2025-12-09T14:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.157362 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: E1209 14:15:54.162171 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBy
tes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"
d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.165739 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.165781 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.165792 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.165819 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.166543 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:54Z","lastTransitionTime":"2025-12-09T14:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.167622 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2888f" event={"ID":"2eddb9fd-1d3a-4992-b326-9271ffb360e7","Type":"ContainerStarted","Data":"4d53608139e6bcaa34cd239c5956698f0e7193e59e00bec92e9bc3a13c12dc3f"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.169734 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" event={"ID":"140ab739-f0e3-4429-8e23-03782755777d","Type":"ContainerStarted","Data":"cc9edcc311fd26c47ab2cf5ae0f797b84a59846b62a81ea33be72b3808994051"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.169797 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" event={"ID":"140ab739-f0e3-4429-8e23-03782755777d","Type":"ContainerStarted","Data":"5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.171695 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-554lf" event={"ID":"2a441b53-f957-4f01-a123-a96c637c3fe2","Type":"ContainerStarted","Data":"e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.173442 5116 generic.go:358] "Generic (PLEG): container finished" podID="280f2c67-05f3-4f21-bd2d-6a22add2b93e" containerID="85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53" exitCode=0 Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.173540 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" event={"ID":"280f2c67-05f3-4f21-bd2d-6a22add2b93e","Type":"ContainerDied","Data":"85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.176109 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" 
event={"ID":"59bb7bf4-a83e-4d96-87b6-b2e4235e1620","Type":"ContainerStarted","Data":"a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.176162 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" event={"ID":"59bb7bf4-a83e-4d96-87b6-b2e4235e1620","Type":"ContainerStarted","Data":"e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.178309 5116 generic.go:358] "Generic (PLEG): container finished" podID="0df855a1-8389-4874-a68c-de5f76fe650a" containerID="5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3" exitCode=0 Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.178383 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerDied","Data":"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.180734 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-26d6n" event={"ID":"c1e98f6b-f90a-4408-b303-926b753052ff","Type":"ContainerStarted","Data":"064e2fc865b576b303cb5db2958eb7ca8f6d4fb296484fdb4fce7827e13e860d"} Dec 09 14:15:54 crc kubenswrapper[5116]: E1209 14:15:54.182658 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.186243 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.186305 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.186325 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.186348 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.186366 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:54Z","lastTransitionTime":"2025-12-09T14:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.197448 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: E1209 14:15:54.201156 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.204868 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.204900 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.204911 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.204927 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.204937 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:54Z","lastTransitionTime":"2025-12-09T14:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:54 crc kubenswrapper[5116]: E1209 14:15:54.218123 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.222971 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.223035 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.223049 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.223068 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.223082 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:54Z","lastTransitionTime":"2025-12-09T14:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:54 crc kubenswrapper[5116]: E1209 14:15:54.235654 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: E1209 14:15:54.235765 5116 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.236886 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.236913 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.236923 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.236940 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.236968 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:54Z","lastTransitionTime":"2025-12-09T14:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.239767 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.277109 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.316624 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.338937 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.339002 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.339016 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.339033 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.339048 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:54Z","lastTransitionTime":"2025-12-09T14:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.356759 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.394879 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.441406 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.441586 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.441677 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.441768 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.441835 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:54Z","lastTransitionTime":"2025-12-09T14:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.445794 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.486502 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocate
dResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.526057 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"
uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.544328 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.544382 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.544395 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.544415 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.544427 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:54Z","lastTransitionTime":"2025-12-09T14:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.556340 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.604346 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.643156 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.646667 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.646720 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.646732 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.646746 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.646761 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:54Z","lastTransitionTime":"2025-12-09T14:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.676480 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.717497 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d53608139e6bcaa34cd239c5956698f0e7193e59e00bec92e9bc3a13c12dc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.748781 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.748854 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.748884 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 
14:15:54.748917 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.748943 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:54Z","lastTransitionTime":"2025-12-09T14:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.756316 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.796656 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.839746 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.850296 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.850340 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.850350 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.850366 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.850384 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:54Z","lastTransitionTime":"2025-12-09T14:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.877544 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.917995 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 
14:15:54.952182 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.952248 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.952269 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.952293 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.952312 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:54Z","lastTransitionTime":"2025-12-09T14:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.958389 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:54 crc kubenswrapper[5116]: I1209 14:15:54.996153 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.037507 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.054756 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.054804 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.054819 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.054839 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.054856 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:55Z","lastTransitionTime":"2025-12-09T14:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.078614 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.118277 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://064e2fc865b576b303cb5db2958eb7ca8f6d4fb296484fdb4fce7827e13e860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.157038 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.157101 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.157111 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.157140 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.157154 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:55Z","lastTransitionTime":"2025-12-09T14:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.161574 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.188346 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerStarted","Data":"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.188401 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerStarted","Data":"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.188416 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerStarted","Data":"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.188427 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerStarted","Data":"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.188438 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerStarted","Data":"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.189980 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" event={"ID":"280f2c67-05f3-4f21-bd2d-6a22add2b93e","Type":"ContainerDied","Data":"f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.189949 5116 generic.go:358] "Generic (PLEG): container finished" podID="280f2c67-05f3-4f21-bd2d-6a22add2b93e" containerID="f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3" exitCode=0 Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.197893 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.256667 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd82
78e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.259237 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.259290 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.259303 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.259322 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.259334 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:55Z","lastTransitionTime":"2025-12-09T14:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.283785 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.320343 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.360180 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.361461 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.361515 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.361530 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.361549 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.361562 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:55Z","lastTransitionTime":"2025-12-09T14:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.396151 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9edcc311fd26c47ab2cf5ae0f797b84a59846b62a81ea33be72b3808994051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.435687 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d53608139e6bcaa34cd239c5956698f0e7193e59e00bec92e9bc3a13c12dc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.459157 5116 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.459227 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.459264 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.459279 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.459328 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.459349 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:59.45932547 +0000 UTC m=+97.981070288 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.459433 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.459490 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:59.459473474 +0000 UTC m=+97.981218282 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.459489 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.459518 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.459535 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.459546 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.459581 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.459602 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.459584 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:59.459567776 +0000 UTC m=+97.981312594 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.459689 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:59.459663999 +0000 UTC m=+97.981408837 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.463393 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.463425 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.463438 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.463457 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.463478 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:55Z","lastTransitionTime":"2025-12-09T14:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.475534 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.517972 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.561133 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.561442 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:15:59.561398366 +0000 UTC m=+98.083143194 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.562649 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.565080 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.565137 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.565155 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.565180 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.565199 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:55Z","lastTransitionTime":"2025-12-09T14:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.600721 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.638107 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.662940 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs\") pod \"network-metrics-daemon-pmt9f\" (UID: \"51843597-ba2b-4059-aa79-13887c6100f2\") " pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.663287 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.663394 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs podName:51843597-ba2b-4059-aa79-13887c6100f2 nodeName:}" failed. No retries permitted until 2025-12-09 14:15:59.6633642 +0000 UTC m=+98.185109038 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs") pod "network-metrics-daemon-pmt9f" (UID: "51843597-ba2b-4059-aa79-13887c6100f2") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.668453 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.668521 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.668550 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.668591 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.668629 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:55Z","lastTransitionTime":"2025-12-09T14:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.679437 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.721008 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.748051 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.748054 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.748273 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.748075 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.748078 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.748446 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.748719 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:15:55 crc kubenswrapper[5116]: E1209 14:15:55.748790 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmt9f" podUID="51843597-ba2b-4059-aa79-13887c6100f2" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.761897 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.771610 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.771722 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.771782 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.771812 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.771833 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:55Z","lastTransitionTime":"2025-12-09T14:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.800844 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9edcc311fd26c47ab2cf5ae0f797b84a59846b62a81ea33be72b3808994051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.837546 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d53608139e6bcaa34cd239c5956698f0e7193e59e00bec92e9bc3a13c12dc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.874414 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.874478 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.874536 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.874558 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.874568 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:55Z","lastTransitionTime":"2025-12-09T14:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.878364 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.921437 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.961025 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.977178 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.977223 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.977235 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.977253 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.977266 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:55Z","lastTransitionTime":"2025-12-09T14:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:55 crc kubenswrapper[5116]: I1209 14:15:55.998184 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.046334 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\
":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitia
lizing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.077298 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.078598 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.078636 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.078648 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.078665 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.078677 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:56Z","lastTransitionTime":"2025-12-09T14:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.119090 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.160872 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.180390 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.180461 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.180485 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.180516 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.180539 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:56Z","lastTransitionTime":"2025-12-09T14:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.195777 5116 generic.go:358] "Generic (PLEG): container finished" podID="280f2c67-05f3-4f21-bd2d-6a22add2b93e" containerID="879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0" exitCode=0 Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.195907 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" event={"ID":"280f2c67-05f3-4f21-bd2d-6a22add2b93e","Type":"ContainerDied","Data":"879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0"} Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.199336 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.201707 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerStarted","Data":"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4"} Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.243110 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.278972 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://064e2fc865b576b303cb5db2958eb7ca8f6d4fb296484fdb4fce7827e13e860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.282678 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.282742 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.282762 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.282787 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.282807 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:56Z","lastTransitionTime":"2025-12-09T14:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.335612 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.361441 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\"
:0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.385597 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.385644 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.385653 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.385669 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.385679 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:56Z","lastTransitionTime":"2025-12-09T14:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.410625 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.438879 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.476530 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.488547 5116 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.488624 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.488650 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.488686 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.488711 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:56Z","lastTransitionTime":"2025-12-09T14:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.522361 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.561190 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.590761 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.590819 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.590831 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.590849 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.590862 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:56Z","lastTransitionTime":"2025-12-09T14:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.604664 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIniti
alizing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.642212 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.680008 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.692897 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.692985 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.693008 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.693035 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.693053 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:56Z","lastTransitionTime":"2025-12-09T14:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.722320 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.757408 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.795170 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.795367 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.795456 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.795517 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.795581 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:56Z","lastTransitionTime":"2025-12-09T14:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.804146 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.837491 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://064e2fc865b576b303cb5db2958eb7ca8f6d4fb296484fdb4fce7827e13e860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.883609 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.898780 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.898844 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.898861 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.898884 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.898898 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:56Z","lastTransitionTime":"2025-12-09T14:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.922160 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.977486 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447
cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"1
0m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a
682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:56 crc kubenswrapper[5116]: I1209 14:15:56.999330 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.000581 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.000619 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.000628 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.000642 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.000650 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:57Z","lastTransitionTime":"2025-12-09T14:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.039516 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.076528 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.102670 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.102735 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.102753 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.102773 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.102785 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:57Z","lastTransitionTime":"2025-12-09T14:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.115262 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9edcc311fd26c47ab2cf5ae0f797b84a59846b62a81ea33be72b3808994051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.157501 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d53608139e6bcaa34cd239c5956698f0e7193e59e00bec92e9bc3a13c12dc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.198181 5116 
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.204469 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.204521 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.204535 5116 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.204554 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.204569 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:57Z","lastTransitionTime":"2025-12-09T14:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.207360 5116 generic.go:358] "Generic (PLEG): container finished" podID="280f2c67-05f3-4f21-bd2d-6a22add2b93e" containerID="8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da" exitCode=0 Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.207448 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" event={"ID":"280f2c67-05f3-4f21-bd2d-6a22add2b93e","Type":"ContainerDied","Data":"8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da"} Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.240424 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.287763 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd82
78e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.307342 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.307378 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.307388 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.307403 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.307413 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:57Z","lastTransitionTime":"2025-12-09T14:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.315474 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.362587 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.394873 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.409460 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.409977 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.410001 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.410022 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.410035 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:57Z","lastTransitionTime":"2025-12-09T14:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.434706 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9edcc311fd26c47ab2cf5ae0f797b84a59846b62a81ea33be72b3808994051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.478233 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d53608139e6bcaa34cd239c5956698f0e7193e59e00bec92e9bc3a13c12dc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.511736 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.511808 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.511827 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.511851 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.511874 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:57Z","lastTransitionTime":"2025-12-09T14:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.515607 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.555425 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.596258 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.613668 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.613712 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.613723 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.613738 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.613748 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:57Z","lastTransitionTime":"2025-12-09T14:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.636177 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.683489 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\
":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",
\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.715128 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.715173 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.715185 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.715200 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.715211 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:57Z","lastTransitionTime":"2025-12-09T14:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.716891 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.747763 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:15:57 crc kubenswrapper[5116]: E1209 14:15:57.747861 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.747996 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.748046 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:15:57 crc kubenswrapper[5116]: E1209 14:15:57.748192 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmt9f" podUID="51843597-ba2b-4059-aa79-13887c6100f2" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.748262 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:15:57 crc kubenswrapper[5116]: E1209 14:15:57.748317 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:15:57 crc kubenswrapper[5116]: E1209 14:15:57.748374 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.758217 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\
\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.800580 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04f
d602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.
168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.817249 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.817313 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.817328 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.817349 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.817375 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:57Z","lastTransitionTime":"2025-12-09T14:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.836815 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.880530 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.917725 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://064e2fc865b576b303cb5db2958eb7ca8f6d4fb296484fdb4fce7827e13e860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.919341 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.919406 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.919426 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.919452 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.919473 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:57Z","lastTransitionTime":"2025-12-09T14:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:57 crc kubenswrapper[5116]: I1209 14:15:57.974206 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.022697 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.022760 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.022780 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.022804 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.022822 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:58Z","lastTransitionTime":"2025-12-09T14:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.125234 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.125281 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.125300 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.125323 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.125340 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:58Z","lastTransitionTime":"2025-12-09T14:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.215476 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerStarted","Data":"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094"} Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.218224 5116 generic.go:358] "Generic (PLEG): container finished" podID="280f2c67-05f3-4f21-bd2d-6a22add2b93e" containerID="4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03" exitCode=0 Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.218274 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" event={"ID":"280f2c67-05f3-4f21-bd2d-6a22add2b93e","Type":"ContainerDied","Data":"4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03"} Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.227144 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.227185 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.227202 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.227225 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.227242 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:58Z","lastTransitionTime":"2025-12-09T14:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.233195 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.246260 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.265134 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.279862 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.295614 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/
etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb4
86afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.309216 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.321738 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.336721 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"contai
nerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.337266 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.337305 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.337318 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.337334 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.337346 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:58Z","lastTransitionTime":"2025-12-09T14:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.345647 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"
name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.356774 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.395369 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://064e2fc865b576b303cb5db2958eb7ca8f6d4fb296484fdb4fce7827e13e860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.439069 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.439108 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.439118 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.439135 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.439145 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:58Z","lastTransitionTime":"2025-12-09T14:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.444372 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.477600 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\"
:0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.522563 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447
cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"1
0m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a
682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.541291 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.541336 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.541368 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.541385 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.541398 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:58Z","lastTransitionTime":"2025-12-09T14:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.560667 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.600746 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.637218 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.643915 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.644001 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.644021 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.644043 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.644061 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:58Z","lastTransitionTime":"2025-12-09T14:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.678734 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9edcc311fd26c47ab2cf5ae0f797b84a59846b62a81ea33be72b3808994051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.715164 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d53608139e6bcaa34cd239c5956698f0e7193e59e00bec92e9bc3a13c12dc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.746383 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.746420 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.746432 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.746450 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.746461 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:58Z","lastTransitionTime":"2025-12-09T14:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.848672 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.848736 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.848760 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.848784 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.848802 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:58Z","lastTransitionTime":"2025-12-09T14:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.952144 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.952216 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.952243 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.952274 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:58 crc kubenswrapper[5116]: I1209 14:15:58.952296 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:58Z","lastTransitionTime":"2025-12-09T14:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.055643 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.055727 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.055754 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.055787 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.055814 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:59Z","lastTransitionTime":"2025-12-09T14:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.158316 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.158400 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.158428 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.158461 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.158488 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:59Z","lastTransitionTime":"2025-12-09T14:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.227728 5116 generic.go:358] "Generic (PLEG): container finished" podID="280f2c67-05f3-4f21-bd2d-6a22add2b93e" containerID="fd78447ccdaa66415d75ba86fe837f42e7e31c7b998d066547fc7f45090ee42c" exitCode=0 Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.227827 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" event={"ID":"280f2c67-05f3-4f21-bd2d-6a22add2b93e","Type":"ContainerDied","Data":"fd78447ccdaa66415d75ba86fe837f42e7e31c7b998d066547fc7f45090ee42c"} Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.249194 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb64
50c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] 
\\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.261213 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.261313 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.261335 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.261361 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.261380 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:59Z","lastTransitionTime":"2025-12-09T14:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.283020 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.299047 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.309805 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.319433 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.328676 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9edcc311fd26c47ab2cf5ae0f797b84a59846b62a81ea33be72b3808994051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources
\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.337864 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d53608139e6bcaa34cd239c5956698f0e7193e59e00bec92e9bc3a13c12dc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.346214 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.357619 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.364251 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.364326 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.364346 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.364377 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.364396 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:59Z","lastTransitionTime":"2025-12-09T14:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.369022 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.379195 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.393478 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/
etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb4
86afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://fd78447ccdaa66415d75ba86fe837f42e7e31c7b998d066547fc7f45090ee42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd78447ccdaa66415d75ba86fe837f42e7e31c7b998d066547fc7f45090ee42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.404047 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.414907 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.432117 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"contai
nerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.442777 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172b
de2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.456582 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.467389 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.467441 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.467463 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.467488 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.467461 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://064e2fc865b576b303cb5db2958eb7ca8f6d4fb296484fdb4fce7827e13e860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.467506 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:59Z","lastTransitionTime":"2025-12-09T14:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.492315 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd
76913424df4ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.509285 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.509346 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.509392 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.509440 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.509647 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:15:59 crc kubenswrapper[5116]: 
E1209 14:15:59.509715 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:16:07.509693328 +0000 UTC m=+106.031438146 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.509768 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.509809 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:16:07.509796941 +0000 UTC m=+106.031541759 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.509897 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.509915 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.509930 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.509993 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:16:07.509979916 +0000 UTC m=+106.031724734 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.510069 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.510085 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.510097 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.510133 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:16:07.51012264 +0000 UTC m=+106.031867448 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.569911 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.570392 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.570414 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.570438 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.570456 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:59Z","lastTransitionTime":"2025-12-09T14:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.611076 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.611325 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:07.611293242 +0000 UTC m=+106.133038080 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.672819 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.672886 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.672915 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.673000 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.673027 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:59Z","lastTransitionTime":"2025-12-09T14:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.713176 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs\") pod \"network-metrics-daemon-pmt9f\" (UID: \"51843597-ba2b-4059-aa79-13887c6100f2\") " pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.713399 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.713510 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs podName:51843597-ba2b-4059-aa79-13887c6100f2 nodeName:}" failed. No retries permitted until 2025-12-09 14:16:07.713483961 +0000 UTC m=+106.235228799 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs") pod "network-metrics-daemon-pmt9f" (UID: "51843597-ba2b-4059-aa79-13887c6100f2") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.747791 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.747836 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.747825 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.748068 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.748159 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.748182 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.748338 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmt9f" podUID="51843597-ba2b-4059-aa79-13887c6100f2" Dec 09 14:15:59 crc kubenswrapper[5116]: E1209 14:15:59.748447 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.776385 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.776422 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.776432 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.776448 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.776462 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:59Z","lastTransitionTime":"2025-12-09T14:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.879037 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.879087 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.879099 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.879116 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.879128 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:59Z","lastTransitionTime":"2025-12-09T14:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.981823 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.981871 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.981882 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.981897 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:15:59 crc kubenswrapper[5116]: I1209 14:15:59.981908 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:15:59Z","lastTransitionTime":"2025-12-09T14:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.085130 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.085220 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.085248 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.085280 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.085306 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:00Z","lastTransitionTime":"2025-12-09T14:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.188228 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.188297 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.188323 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.188356 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.188378 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:00Z","lastTransitionTime":"2025-12-09T14:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.291104 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.291170 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.291195 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.291225 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.291250 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:00Z","lastTransitionTime":"2025-12-09T14:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.393893 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.393944 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.394055 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.394078 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.394090 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:00Z","lastTransitionTime":"2025-12-09T14:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.496744 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.496798 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.496810 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.496826 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.496838 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:00Z","lastTransitionTime":"2025-12-09T14:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.599245 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.599330 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.599358 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.599391 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.599417 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:00Z","lastTransitionTime":"2025-12-09T14:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.703171 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.703225 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.703244 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.703263 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.703277 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:00Z","lastTransitionTime":"2025-12-09T14:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.806135 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.806226 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.806253 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.806278 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.806296 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:00Z","lastTransitionTime":"2025-12-09T14:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.909175 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.909228 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.909245 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.909264 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:00 crc kubenswrapper[5116]: I1209 14:16:00.909277 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:00Z","lastTransitionTime":"2025-12-09T14:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.011817 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.011859 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.011869 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.011882 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.011892 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:01Z","lastTransitionTime":"2025-12-09T14:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.114088 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.114157 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.114175 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.114200 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.114218 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:01Z","lastTransitionTime":"2025-12-09T14:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.216623 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.216677 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.216689 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.216710 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.216723 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:01Z","lastTransitionTime":"2025-12-09T14:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.241257 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerStarted","Data":"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51"} Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.248391 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" event={"ID":"280f2c67-05f3-4f21-bd2d-6a22add2b93e","Type":"ContainerStarted","Data":"16f6db5c0970ecffe13e8a449f35727b141a66af59032b3c952dab43f8c1ecbb"} Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.260293 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.276901 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.288689 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65
534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.304187 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.315529 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://064e2fc865b576b303cb5db2958eb7ca8f6d4fb296484fdb4fce7827e13e860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.320365 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.320447 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.320469 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.320500 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.320525 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:01Z","lastTransitionTime":"2025-12-09T14:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.338221 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\
"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"70Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:16:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-
ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.350890 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfd
fc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.389806 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResourc
es\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"1
0m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.422697 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.422736 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.422745 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.422758 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.422767 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:01Z","lastTransitionTime":"2025-12-09T14:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.425489 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.440249 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.448350 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.456249 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9edcc311fd26c47ab2cf5ae0f797b84a59846b62a81ea33be72b3808994051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.464526 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d53608139e6bcaa34cd239c5956698f0e7193e59e00bec92e9bc3a13c12dc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.471725 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.480389 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.492459 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.504649 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.516791 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/
etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb4
86afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://fd78447ccdaa66415d75ba86fe837f42e7e31c7b998d066547fc7f45090ee42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd78447ccdaa66415d75ba86fe837f42e7e31c7b998d066547fc7f45090ee42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.524846 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.524893 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.524905 5116 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.524920 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.524931 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:01Z","lastTransitionTime":"2025-12-09T14:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.526218 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.541387 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\
"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"70Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:16:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-
ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.561092 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfd
fc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.583382 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResourc
es\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"1
0m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.594292 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.605393 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.614356 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.623911 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9edcc311fd26c47ab2cf5ae0f797b84a59846b62a81ea33be72b3808994051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources
\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.627249 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.627302 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.627320 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.627337 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.627349 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:01Z","lastTransitionTime":"2025-12-09T14:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.632803 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d53608139e6bcaa34cd239c5956698f0e7193e59e00bec92e9bc3a13c12dc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.642677 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.654849 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.667531 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.679074 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.693908 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://16f6db5c0970ecffe13e8a449f35727b141a66af59032b3c952dab43f8c1ecbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:16:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8
880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\
\"},\\\"containerID\\\":\\\"cri-o://fd78447ccdaa66415d75ba86fe837f42e7e31c7b998d066547fc7f45090ee42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd78447ccdaa66415d75ba86fe837f42e7e31c7b998d066547fc7f45090ee42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.706702 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.717556 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.729204 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.729278 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.729297 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.729319 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.729362 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:01Z","lastTransitionTime":"2025-12-09T14:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.735522 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.744623 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.747885 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:16:01 crc kubenswrapper[5116]: E1209 14:16:01.748133 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmt9f" podUID="51843597-ba2b-4059-aa79-13887c6100f2" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.748172 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.748263 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:16:01 crc kubenswrapper[5116]: E1209 14:16:01.748434 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:16:01 crc kubenswrapper[5116]: E1209 14:16:01.748568 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.748596 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:16:01 crc kubenswrapper[5116]: E1209 14:16:01.748745 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.756678 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.763659 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://064e2fc865b576b303cb5db2958eb7ca8f6d4fb296484fdb4fce7827e13e860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.774728 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.786350 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.798042 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.807334 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9edcc311fd26c47ab2cf5ae0f797b84a59846b62a81ea33be72b3808994051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.816409 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d53608139e6bcaa34cd239c5956698f0e7193e59e00bec92e9bc3a13c12dc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.825481 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.830681 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.830750 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.830768 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.830784 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.830795 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:01Z","lastTransitionTime":"2025-12-09T14:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.836686 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.848497 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.856300 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.869569 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://16f6db5c0970ecffe13e8a449f35727b141a66af59032b3c952dab43f8c1ecbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:16:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\
\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://fd78447ccdaa66415d75ba86fe837f42e7e31c7b998d066547fc7f45090ee42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd78447ccdaa66415d75ba86fe837f42e7e31c7b998d066547fc7f45090ee42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.877820 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.889020 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.902114 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"contai
nerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.911364 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172b
de2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.921311 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.928243 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://064e2fc865b576b303cb5db2958eb7ca8f6d4fb296484fdb4fce7827e13e860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.936360 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.936406 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.936419 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.936439 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.936452 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:01Z","lastTransitionTime":"2025-12-09T14:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.942900 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\
"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"70Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:16:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-
ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.955460 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfd
fc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:01 crc kubenswrapper[5116]: I1209 14:16:01.975674 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResourc
es\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"1
0m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.038768 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.038804 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.038813 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.038826 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.038834 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:02Z","lastTransitionTime":"2025-12-09T14:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.140102 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.140383 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.140391 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.140403 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.140411 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:02Z","lastTransitionTime":"2025-12-09T14:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.242415 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.242480 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.242504 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.242533 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.242558 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:02Z","lastTransitionTime":"2025-12-09T14:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.253055 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.253135 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.253162 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.291098 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.291254 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.308517 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\
"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.327406 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.341450 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.344695 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.344763 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.344787 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.344810 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.344831 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:02Z","lastTransitionTime":"2025-12-09T14:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.358706 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.372067 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://064e2fc865b576b303cb5db2958eb7ca8f6d4fb296484fdb4fce7827e13e860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.398126 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"70Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:16:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},
{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.413311 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.433008 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd82
78e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.446295 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.447761 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.447821 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.447845 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.447875 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.447901 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:02Z","lastTransitionTime":"2025-12-09T14:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.463000 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.476937 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.490552 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9edcc311fd26c47ab2cf5ae0f797b84a59846b62a81ea33be72b3808994051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.503071 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d53608139e6bcaa34cd239c5956698f0e7193e59e00bec92e9bc3a13c12dc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.512700 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.525082 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.538393 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.550150 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.550231 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.550250 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.550591 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.550636 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:02Z","lastTransitionTime":"2025-12-09T14:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.550893 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.570547 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://16f6db5c0970ecffe13e8a449f35727b141a66af59032b3c952dab43f8c1ecbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:16:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"ter
minated\\\":{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://fd78447ccdaa66415d75ba86fe837f42e7e31c7b998d066547fc7f45090ee42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd78447ccdaa66415d75ba86fe837f42e7e31c7b998d066547fc7f45090ee42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.581184 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ced
a909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.593861 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.607098 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.637080 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"140ab739-f0e3-4429-8e23-03782755777d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9edcc311fd26c47ab2cf5ae0f797b84a59846b62a81ea33be72b3808994051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb62t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-phdhk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.652570 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.652657 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.652684 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.652709 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.652727 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:02Z","lastTransitionTime":"2025-12-09T14:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.674923 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2888f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eddb9fd-1d3a-4992-b326-9271ffb360e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d53608139e6bcaa34cd239c5956698f0e7193e59e00bec92e9bc3a13c12dc3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7mnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2888f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.716766 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51843597-ba2b-4059-aa79-13887c6100f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wq82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmt9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.755300 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.755356 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.755375 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.755397 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.755415 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:02Z","lastTransitionTime":"2025-12-09T14:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.761476 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"766c91a8-03e2-444b-85e7-9000190ac3d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6039b6b301ca2b0d17876a2c8a3d261aa1e7aaf7ee514673351ba499e7d46e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede87517cbf886d0035dd5a12d24f971471118275ad3606cefe648fc069e270a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a710213daf6e513b0205e94ee4d79ff2df774b5fe1e794b0fa72c0
300cd9dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4fff1930d95cd8e1d7990c406d8c5dfb8386a898d6c5f75cd3f195423e6cba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.798221 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.839804 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.857387 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.857460 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.857482 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.857507 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.857526 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:02Z","lastTransitionTime":"2025-12-09T14:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.880647 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-65brv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280f2c67-05f3-4f21-bd2d-6a22add2b93e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://16f6db5c0970ecffe13e8a449f35727b141a66af59032b3c952dab43f8c1ecbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:16:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d74f0e78ff10ac95b07e5b53ee8b31fd34845b3d1c7dc7b11b5953ff302f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"nam
e\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24c19d78a715b780faf2ed62f2cdadae328faee8479d30322d58e42b848d1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879b45714cc01bc3f373f3e8e443d7b8b28d31d266d180dc4005c1997b02d9e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333c612889944e499f1aaf763d7356a9c74af9f1bf1d523f2788048288f03da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4af9308554255a91f35696279f014743fea75eaa0d0ae9cf9f61b836e1e40d03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://fd78447ccdaa66415d75ba86fe837f42e7e31c7b998d066547fc7f45090ee42c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd78447ccdaa66415d75ba86fe837f42e7e31c7b998d066547fc7f45090ee42c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:58Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtm56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-65brv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.919771 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":
{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.956154 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.959796 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.959837 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.959848 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.959863 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.959873 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:02Z","lastTransitionTime":"2025-12-09T14:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:02 crc kubenswrapper[5116]: I1209 14:16:02.999110 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.034177 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.062304 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.062382 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.062407 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.062439 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.062464 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:03Z","lastTransitionTime":"2025-12-09T14:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.081011 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.117205 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://064e2fc865b576b303cb5db2958eb7ca8f6d4fb296484fdb4fce7827e13e860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.165334 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.165404 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.165424 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.165449 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.165468 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:03Z","lastTransitionTime":"2025-12-09T14:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.171127 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\
"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"70Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:16:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-
ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.205041 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdf
c158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 
14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.253024 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":
\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\
":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.267672 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.267769 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.267789 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.267812 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.267830 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:03Z","lastTransitionTime":"2025-12-09T14:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.281206 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.369891 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.369992 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.370006 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.370023 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.370035 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:03Z","lastTransitionTime":"2025-12-09T14:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.473031 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.473112 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.473133 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.473160 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.473180 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:03Z","lastTransitionTime":"2025-12-09T14:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.575898 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.576012 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.576038 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.576069 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.576092 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:03Z","lastTransitionTime":"2025-12-09T14:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.678695 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.678742 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.678753 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.678769 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.678779 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:03Z","lastTransitionTime":"2025-12-09T14:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.748020 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.748108 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:16:03 crc kubenswrapper[5116]: E1209 14:16:03.748246 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:16:03 crc kubenswrapper[5116]: E1209 14:16:03.748348 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.748503 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.748546 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:16:03 crc kubenswrapper[5116]: E1209 14:16:03.748753 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:16:03 crc kubenswrapper[5116]: E1209 14:16:03.749016 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmt9f" podUID="51843597-ba2b-4059-aa79-13887c6100f2" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.781042 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.781097 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.781115 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.781142 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.781162 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:03Z","lastTransitionTime":"2025-12-09T14:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.884413 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.884500 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.884516 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.884542 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.884561 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:03Z","lastTransitionTime":"2025-12-09T14:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.986824 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.986914 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.986931 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.986950 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:03 crc kubenswrapper[5116]: I1209 14:16:03.986984 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:03Z","lastTransitionTime":"2025-12-09T14:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.089484 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.089558 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.089577 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.089602 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.089623 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:04Z","lastTransitionTime":"2025-12-09T14:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.192580 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.192706 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.192732 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.192755 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.192773 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:04Z","lastTransitionTime":"2025-12-09T14:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.295193 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.295271 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.295290 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.295315 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.295333 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:04Z","lastTransitionTime":"2025-12-09T14:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.397304 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.397760 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.397831 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.397857 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.397877 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:04Z","lastTransitionTime":"2025-12-09T14:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.417790 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.417868 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.417888 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.417915 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.417933 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:04Z","lastTransitionTime":"2025-12-09T14:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:04 crc kubenswrapper[5116]: E1209 14:16:04.439344 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.444452 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.444593 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.444677 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.444773 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.444857 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:04Z","lastTransitionTime":"2025-12-09T14:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:04 crc kubenswrapper[5116]: E1209 14:16:04.463105 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.467852 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.467926 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.467944 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.467997 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.468017 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:04Z","lastTransitionTime":"2025-12-09T14:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:04 crc kubenswrapper[5116]: E1209 14:16:04.480985 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.485880 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.485933 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.485946 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.485986 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.485999 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:04Z","lastTransitionTime":"2025-12-09T14:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:04 crc kubenswrapper[5116]: E1209 14:16:04.500218 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.505888 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.505951 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.506005 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.506030 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.506045 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:04Z","lastTransitionTime":"2025-12-09T14:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:04 crc kubenswrapper[5116]: E1209 14:16:04.519227 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:16:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7322bcb2-516b-419d-9115-e535bc272977\\\",\\\"systemUUID\\\":\\\"d07826d4-dd25-445c-8d20-f1545448b405\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:04 crc kubenswrapper[5116]: E1209 14:16:04.519497 5116 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.521764 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.521839 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.521862 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.521890 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.521911 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:04Z","lastTransitionTime":"2025-12-09T14:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.625807 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.625876 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.625888 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.625908 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.625923 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:04Z","lastTransitionTime":"2025-12-09T14:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.728804 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.728894 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.728910 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.728944 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.728991 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:04Z","lastTransitionTime":"2025-12-09T14:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.748899 5116 scope.go:117] "RemoveContainer" containerID="97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176" Dec 09 14:16:04 crc kubenswrapper[5116]: E1209 14:16:04.749183 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.832062 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.832111 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.832126 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.832175 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.832212 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:04Z","lastTransitionTime":"2025-12-09T14:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.935301 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.935368 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.935386 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.935415 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:04 crc kubenswrapper[5116]: I1209 14:16:04.935461 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:04Z","lastTransitionTime":"2025-12-09T14:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.038825 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.039534 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.039636 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.039753 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.039832 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:05Z","lastTransitionTime":"2025-12-09T14:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.142374 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.142552 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.142583 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.142616 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.142642 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:05Z","lastTransitionTime":"2025-12-09T14:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.246979 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.247682 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.247728 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.247759 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.247780 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:05Z","lastTransitionTime":"2025-12-09T14:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.350494 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.350551 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.350576 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.350627 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.350662 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:05Z","lastTransitionTime":"2025-12-09T14:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.453467 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.453587 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.453614 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.453642 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.453665 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:05Z","lastTransitionTime":"2025-12-09T14:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.556244 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.556324 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.556343 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.556371 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.556389 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:05Z","lastTransitionTime":"2025-12-09T14:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.659271 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.659347 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.659369 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.659394 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.659411 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:05Z","lastTransitionTime":"2025-12-09T14:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.748656 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.748933 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:16:05 crc kubenswrapper[5116]: E1209 14:16:05.749154 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.749164 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:16:05 crc kubenswrapper[5116]: E1209 14:16:05.749321 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:16:05 crc kubenswrapper[5116]: E1209 14:16:05.749405 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.749505 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:16:05 crc kubenswrapper[5116]: E1209 14:16:05.749792 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmt9f" podUID="51843597-ba2b-4059-aa79-13887c6100f2" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.766889 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.767175 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.767238 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.767332 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.767360 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:05Z","lastTransitionTime":"2025-12-09T14:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.870624 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.870716 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.870732 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.870755 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.870773 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:05Z","lastTransitionTime":"2025-12-09T14:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.973835 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.974212 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.974407 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.974573 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:05 crc kubenswrapper[5116]: I1209 14:16:05.974787 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:05Z","lastTransitionTime":"2025-12-09T14:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.112609 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.112668 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.112692 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.112721 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.112746 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:07Z","lastTransitionTime":"2025-12-09T14:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.215567 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.215995 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.216012 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.216031 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.216044 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:07Z","lastTransitionTime":"2025-12-09T14:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.296063 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"4a649018cce9afc10d5e5ec12fd5ae800c1e3acad78a801271067cbe8e14bdb2"} Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.296123 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"408615e35dad01268867a6c927890b3592c9d3b8669c5d89ce57143e072b71a4"} Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.310270 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ce
da909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvmnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w69sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.317445 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.317482 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.317548 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.317565 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.317578 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:07Z","lastTransitionTime":"2025-12-09T14:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.323248 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-554lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a441b53-f957-4f01-a123-a96c637c3fe2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-jbg2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-554lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.333922 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0de8dae5-713f-4957-8a54-86af74b43f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e9165125badb66babadbf350562a57cc7845f2f9c560915487a4df4100a64ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0ba0ace4620a2f1685bdf8f87c6105313a0cd80598ce57fe829866c33f5873c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7aa2ec21fa3672a782842012ac999b12f27b4d7179a55c14c2ccd959a3b499b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.342697 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b5b49e-8b59-4753-92f3-1e20a4b5db80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89574910953aa1dd4b6e83343767cf7793dc2b56746b65ea6e937594b878a4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1090a28d009a6ee759c343e0b3ebbf3ac3d10396ce30c8c4eefc4c607d18cb\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.353019 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.361392 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26d6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e98f6b-f90a-4408-b303-926b753052ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://064e2fc865b576b303cb5db2958eb7ca8f6d4fb296484fdb4fce7827e13e860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfx5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26d6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.375419 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df855a1-8389-4874-a68c-de5f76fe650a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:15:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"70Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:16:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:15:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},
{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xt5f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:15:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tg8rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.388905 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"015770e1-5eef-4f29-9f60-2798e4e1ed27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:15:30Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:15:29.831059 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:15:29.831287 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:15:29.832635 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3816721944/tls.crt::/tmp/serving-cert-3816721944/tls.key\\\\\\\"\\\\nI1209 14:15:30.063726 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:15:30.066182 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:15:30.066238 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:15:30.066316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:15:30.066352 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:15:30.071181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:15:30.071235 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071247 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:15:30.071256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:15:30.071262 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:15:30.071268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:15:30.071275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:15:30.071204 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:15:30.073102 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:15:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.409053 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc6d4a1-d6b4-44b0-8a26-1fdec2f89e93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:14:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://43141db7a27f1b39127559b13744bc63624f31a877cd4eaf676495021e926960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://55cdef5d5b7685eef665a4723acd93400d6af723c37eaa4f7a5c190ba3814f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fde328260a5e51d3ec6081b7679082ed04881e61f5211475a1e1a876c017dc66\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52c5521ce5b2cc94db5177dc5e385dceb583e010bcd7f2dc2ca4d895654696fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8106be0ddfdaa4d810c676c2fc3fa3bded5543b164b780d6ad0b393eb4aea95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:14:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd82
78e6db5a9ddbfd6552f8e9873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed1c7ffb73a45deda41f24c0dedbd44de2dd8278e6db5a9ddbfd6552f8e9873\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://091b97a70f7ef418581e3d619fec78f4baa4e94dc32a63aeba224811cdf9757c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8293f579525ee0591908e1672a53d094dbc8cb0ab347f253be765de36c0642ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:14:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:14:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.419388 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.419432 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.419445 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.419464 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.419476 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:07Z","lastTransitionTime":"2025-12-09T14:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.481070 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podStartSLOduration=85.481052913 podStartE2EDuration="1m25.481052913s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:07.48094396 +0000 UTC m=+106.002688798" watchObservedRunningTime="2025-12-09 14:16:07.481052913 +0000 UTC m=+106.002797711" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.494182 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-2888f" podStartSLOduration=85.494154642 podStartE2EDuration="1m25.494154642s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:07.493890725 +0000 UTC m=+106.015635523" watchObservedRunningTime="2025-12-09 14:16:07.494154642 +0000 UTC m=+106.015899440" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.523793 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.523839 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.523853 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.523870 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.523884 5116 setters.go:618] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:07Z","lastTransitionTime":"2025-12-09T14:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.543758 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=16.543725681 podStartE2EDuration="16.543725681s" podCreationTimestamp="2025-12-09 14:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:07.528137336 +0000 UTC m=+106.049882144" watchObservedRunningTime="2025-12-09 14:16:07.543725681 +0000 UTC m=+106.065470519" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.575831 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-65brv" podStartSLOduration=85.575805554 podStartE2EDuration="1m25.575805554s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:07.575332662 +0000 UTC m=+106.097077470" watchObservedRunningTime="2025-12-09 14:16:07.575805554 +0000 UTC m=+106.097550382" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.609564 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.609612 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.609648 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.609680 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.609744 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not 
registered Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.609791 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.609857 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.609897 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.609916 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.609946 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.610055 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.610079 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.609868 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.60984221 +0000 UTC m=+122.131587048 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.610177 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.610157699 +0000 UTC m=+122.131902537 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.610204 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.610191559 +0000 UTC m=+122.131936397 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.610226 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.61021412 +0000 UTC m=+122.131958958 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.625635 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.625688 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.625705 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.625727 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.625743 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:07Z","lastTransitionTime":"2025-12-09T14:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.711051 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.711343 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.71130979 +0000 UTC m=+122.233054598 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.728560 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.728612 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.728638 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.728659 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.728671 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:07Z","lastTransitionTime":"2025-12-09T14:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.748182 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.748310 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmt9f" podUID="51843597-ba2b-4059-aa79-13887c6100f2" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.748178 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.748413 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.748555 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.748650 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.748543 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.748857 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.812438 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs\") pod \"network-metrics-daemon-pmt9f\" (UID: \"51843597-ba2b-4059-aa79-13887c6100f2\") " pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.812775 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:16:07 crc kubenswrapper[5116]: E1209 14:16:07.812947 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs podName:51843597-ba2b-4059-aa79-13887c6100f2 nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.812913954 +0000 UTC m=+122.334658792 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs") pod "network-metrics-daemon-pmt9f" (UID: "51843597-ba2b-4059-aa79-13887c6100f2") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.831209 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.831287 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.831313 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.831345 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.831370 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:07Z","lastTransitionTime":"2025-12-09T14:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.934765 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.934815 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.934830 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.934849 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:07 crc kubenswrapper[5116]: I1209 14:16:07.934862 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:07Z","lastTransitionTime":"2025-12-09T14:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.037006 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.037055 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.037067 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.037086 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.037103 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:08Z","lastTransitionTime":"2025-12-09T14:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.139591 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.139642 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.139657 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.139676 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.139689 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:08Z","lastTransitionTime":"2025-12-09T14:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.241588 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.241627 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.241637 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.241652 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.241664 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:08Z","lastTransitionTime":"2025-12-09T14:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.300357 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"c9840fbbe37341fb652c26146f6d3c94badc9204f7264c53b11b1adbcf3b1212"} Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.301558 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"215fec855ec8dd6fb30d8089cdd2171f85da62cd1e74d33276437d03c9c94ba5"} Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.329078 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-26d6n" podStartSLOduration=86.329061408 podStartE2EDuration="1m26.329061408s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:08.328900254 +0000 UTC m=+106.850645052" watchObservedRunningTime="2025-12-09 14:16:08.329061408 +0000 UTC m=+106.850806206" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.343191 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.343238 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.343251 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.343271 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.343283 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:08Z","lastTransitionTime":"2025-12-09T14:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.356350 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" podStartSLOduration=86.356332733 podStartE2EDuration="1m26.356332733s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:08.35623388 +0000 UTC m=+106.877978678" watchObservedRunningTime="2025-12-09 14:16:08.356332733 +0000 UTC m=+106.878077531" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.389643 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pmt9f"] Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.389755 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:16:08 crc kubenswrapper[5116]: E1209 14:16:08.389843 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmt9f" podUID="51843597-ba2b-4059-aa79-13887c6100f2" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.405535 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=17.405517642 podStartE2EDuration="17.405517642s" podCreationTimestamp="2025-12-09 14:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:08.405197053 +0000 UTC m=+106.926941871" watchObservedRunningTime="2025-12-09 14:16:08.405517642 +0000 UTC m=+106.927262440" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.445707 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.445779 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.445799 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.445823 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.445841 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:08Z","lastTransitionTime":"2025-12-09T14:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.451849 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" podStartSLOduration=85.451826184 podStartE2EDuration="1m25.451826184s" podCreationTimestamp="2025-12-09 14:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:08.436925908 +0000 UTC m=+106.958670706" watchObservedRunningTime="2025-12-09 14:16:08.451826184 +0000 UTC m=+106.973570992" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.452547 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-554lf" podStartSLOduration=86.452539493 podStartE2EDuration="1m26.452539493s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:08.452000979 +0000 UTC m=+106.973745807" watchObservedRunningTime="2025-12-09 14:16:08.452539493 +0000 UTC m=+106.974284291" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.476883 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=17.47685928 podStartE2EDuration="17.47685928s" podCreationTimestamp="2025-12-09 14:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:08.475392331 +0000 UTC m=+106.997137139" watchObservedRunningTime="2025-12-09 14:16:08.47685928 +0000 UTC m=+106.998604108" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.488632 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.488617173 podStartE2EDuration="17.488617173s" podCreationTimestamp="2025-12-09 14:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:08.487622516 +0000 UTC m=+107.009367314" watchObservedRunningTime="2025-12-09 14:16:08.488617173 +0000 UTC m=+107.010361971" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.548943 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.549005 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.549018 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.549033 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.549046 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:08Z","lastTransitionTime":"2025-12-09T14:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.652517 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.652569 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.652589 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.652614 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.652632 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:08Z","lastTransitionTime":"2025-12-09T14:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.754620 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.754660 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.754671 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.754686 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.754698 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:08Z","lastTransitionTime":"2025-12-09T14:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.857288 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.857327 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.857338 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.857355 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.857366 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:08Z","lastTransitionTime":"2025-12-09T14:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.960335 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.960731 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.960749 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.960773 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:08 crc kubenswrapper[5116]: I1209 14:16:08.960793 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:08Z","lastTransitionTime":"2025-12-09T14:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.062919 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.063014 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.063035 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.063061 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.063080 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:09Z","lastTransitionTime":"2025-12-09T14:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.164752 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.164810 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.164829 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.164855 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.164872 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:09Z","lastTransitionTime":"2025-12-09T14:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.267731 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.267796 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.267815 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.267842 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.267866 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:09Z","lastTransitionTime":"2025-12-09T14:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.370066 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.370126 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.370142 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.370163 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.370180 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:09Z","lastTransitionTime":"2025-12-09T14:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.475545 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.475616 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.475643 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.475664 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.475678 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:09Z","lastTransitionTime":"2025-12-09T14:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.577267 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.577314 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.577327 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.577345 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.577358 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:09Z","lastTransitionTime":"2025-12-09T14:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.680323 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.680392 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.680417 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.680446 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.680472 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:09Z","lastTransitionTime":"2025-12-09T14:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.748389 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.748387 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.748646 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:16:09 crc kubenswrapper[5116]: E1209 14:16:09.748562 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:16:09 crc kubenswrapper[5116]: E1209 14:16:09.748885 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.748924 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:16:09 crc kubenswrapper[5116]: E1209 14:16:09.749146 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmt9f" podUID="51843597-ba2b-4059-aa79-13887c6100f2" Dec 09 14:16:09 crc kubenswrapper[5116]: E1209 14:16:09.749287 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.783410 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.783482 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.783502 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.783527 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.783544 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:09Z","lastTransitionTime":"2025-12-09T14:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.886171 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.886249 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.886273 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.886305 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.886328 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:09Z","lastTransitionTime":"2025-12-09T14:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.989148 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.989297 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.989359 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.989384 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:09 crc kubenswrapper[5116]: I1209 14:16:09.989440 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:09Z","lastTransitionTime":"2025-12-09T14:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.091853 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.091909 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.091921 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.091939 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.091951 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:10Z","lastTransitionTime":"2025-12-09T14:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.195207 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.195275 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.195296 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.195324 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.195342 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:10Z","lastTransitionTime":"2025-12-09T14:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.298078 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.298149 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.298174 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.298212 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.298235 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:10Z","lastTransitionTime":"2025-12-09T14:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.400818 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.400897 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.400915 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.400948 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.401007 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:10Z","lastTransitionTime":"2025-12-09T14:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.503509 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.503567 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.503584 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.503610 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.503629 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:10Z","lastTransitionTime":"2025-12-09T14:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.606250 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.606303 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.606320 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.606343 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.606357 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:10Z","lastTransitionTime":"2025-12-09T14:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.709569 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.709631 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.709681 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.709705 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.709726 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:10Z","lastTransitionTime":"2025-12-09T14:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.813713 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.813812 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.813851 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.813885 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.813911 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:10Z","lastTransitionTime":"2025-12-09T14:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.916231 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.916298 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.916316 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.916343 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:10 crc kubenswrapper[5116]: I1209 14:16:10.916361 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:10Z","lastTransitionTime":"2025-12-09T14:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.019114 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.019190 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.019209 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.019234 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.019252 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:11Z","lastTransitionTime":"2025-12-09T14:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.121676 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.121745 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.121766 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.121792 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.121813 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:11Z","lastTransitionTime":"2025-12-09T14:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.225013 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.225083 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.225101 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.225125 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.225143 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:11Z","lastTransitionTime":"2025-12-09T14:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.327902 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.327994 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.328019 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.328046 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.328065 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:11Z","lastTransitionTime":"2025-12-09T14:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.430181 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.430217 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.430226 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.430238 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.430248 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:11Z","lastTransitionTime":"2025-12-09T14:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.532816 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.532883 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.532908 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.532937 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.533006 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:11Z","lastTransitionTime":"2025-12-09T14:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.635451 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.635498 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.635508 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.635521 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.635531 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:11Z","lastTransitionTime":"2025-12-09T14:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.738793 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.738867 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.738887 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.738912 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.738930 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:11Z","lastTransitionTime":"2025-12-09T14:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.750649 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.750795 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:16:11 crc kubenswrapper[5116]: E1209 14:16:11.750866 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:16:11 crc kubenswrapper[5116]: E1209 14:16:11.750892 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.750928 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:16:11 crc kubenswrapper[5116]: E1209 14:16:11.751038 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.751388 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:16:11 crc kubenswrapper[5116]: E1209 14:16:11.751558 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmt9f" podUID="51843597-ba2b-4059-aa79-13887c6100f2" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.840939 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.841032 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.841054 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.841080 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.841098 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:11Z","lastTransitionTime":"2025-12-09T14:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.943440 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.943529 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.943554 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.943589 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:11 crc kubenswrapper[5116]: I1209 14:16:11.943615 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:11Z","lastTransitionTime":"2025-12-09T14:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.045620 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.045690 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.045710 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.045733 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.045749 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:12Z","lastTransitionTime":"2025-12-09T14:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.148389 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.148450 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.148469 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.148488 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.148501 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:12Z","lastTransitionTime":"2025-12-09T14:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.251477 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.251577 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.251598 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.251627 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.251646 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:12Z","lastTransitionTime":"2025-12-09T14:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.354117 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.354184 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.354205 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.354230 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.354247 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:12Z","lastTransitionTime":"2025-12-09T14:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.456531 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.456604 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.456628 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.456661 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.456684 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:12Z","lastTransitionTime":"2025-12-09T14:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.558876 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.558991 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.559018 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.559050 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.559071 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:12Z","lastTransitionTime":"2025-12-09T14:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.661413 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.661477 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.661492 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.661512 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.661525 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:12Z","lastTransitionTime":"2025-12-09T14:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.763846 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.763902 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.763913 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.763932 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.763944 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:16:12Z","lastTransitionTime":"2025-12-09T14:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.866234 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.866302 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.866322 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.866344 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.866509 5116 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Dec 09 14:16:12 crc kubenswrapper[5116]: I1209 14:16:12.915614 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-g2vff"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.362060 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-tx992"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.362236 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.365983 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-52tsw"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.366096 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-tx992" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.368358 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.368453 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.368851 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.369574 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.369634 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.369852 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.370120 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-8sdgn"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.370239 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.372916 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-42qdh"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.373169 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.378269 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.378575 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-42qdh" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.380908 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.381124 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.381170 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.381344 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.381381 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.382095 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.382092 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.382176 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.382924 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.383241 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.383505 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.385217 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-x4svw"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.389032 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.389280 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 09 14:16:13 crc 
kubenswrapper[5116]: I1209 14:16:13.389405 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.390645 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.390833 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.390892 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-km7vh"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.391014 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.391187 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.395421 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.395451 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.395588 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.403472 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-8k9f4"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.406449 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.406758 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.407570 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.408924 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.409382 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.411659 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.412142 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.416830 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.416936 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.417311 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.417718 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.417888 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.418149 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.418321 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-km7vh" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.418894 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.421701 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.421917 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.422104 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.422757 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.424180 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.424375 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.426888 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.427051 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.430467 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.430638 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.430858 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.434075 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-kdm79"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.434760 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.434813 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.435138 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.435487 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.435621 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.435835 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.436093 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.436400 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.436796 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.436975 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.437095 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.437105 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.437703 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.437751 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.439115 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.439740 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.441406 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.441779 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.448722 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.449205 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.460267 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.460734 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.460907 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.461594 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-zssql"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.462639 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.463565 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.464151 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.465369 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.466584 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.466861 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.467108 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.467189 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.467380 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.467599 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.473480 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-5rkz7"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.476028 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zttss"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.476216 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.476250 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.485673 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.486526 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.486842 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.487193 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.487343 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.487742 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.487814 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.488026 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zttss" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.488742 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.493240 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.493275 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.493276 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.493241 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.493390 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.493407 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.493412 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.493491 
5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.493503 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.493561 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.493595 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.493641 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.493788 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.493875 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.493895 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.494039 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.494103 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.494132 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.494367 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.494898 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.496788 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.497702 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.497984 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.502425 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.503787 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.504092 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-serving-cert\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.504203 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8d229a45-586a-4cf8-9e25-fd80224017fb-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-pw8wg\" (UID: \"8d229a45-586a-4cf8-9e25-fd80224017fb\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.504252 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n69t9\" (UniqueName: \"kubernetes.io/projected/89cfa976-43a8-469f-ba75-7a630ae3e072-kube-api-access-n69t9\") pod \"openshift-apiserver-operator-846cbfc458-9qk2k\" (UID: \"89cfa976-43a8-469f-ba75-7a630ae3e072\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.504291 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spfpb\" (UniqueName: \"kubernetes.io/projected/908467fd-ce00-441d-a504-dce785c290f2-kube-api-access-spfpb\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.504313 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9bbm\" (UniqueName: \"kubernetes.io/projected/22f707dd-e3f1-40e4-bc80-72d0d0ccf8ad-kube-api-access-w9bbm\") pod \"downloads-747b44746d-tx992\" (UID: \"22f707dd-e3f1-40e4-bc80-72d0d0ccf8ad\") " pod="openshift-console/downloads-747b44746d-tx992" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.504329 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f436eb65-e5b7-4b61-9072-699f6c071102-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.504347 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/627f3889-5fe0-4a44-9def-9363af7a5979-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-kmf69\" (UID: \"627f3889-5fe0-4a44-9def-9363af7a5979\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.504458 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.504629 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9ae31159-3efc-4516-830c-cabd140b3a6b-tmp-dir\") pod \"dns-operator-799b87ffcd-42qdh\" (UID: \"9ae31159-3efc-4516-830c-cabd140b3a6b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-42qdh" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.504846 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f436eb65-e5b7-4b61-9072-699f6c071102-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.504904 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.504932 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br69p\" (UniqueName: \"kubernetes.io/projected/f436eb65-e5b7-4b61-9072-699f6c071102-kube-api-access-br69p\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.504973 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-audit-dir\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.504997 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-client-ca\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505054 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpgfv\" (UniqueName: \"kubernetes.io/projected/9ae31159-3efc-4516-830c-cabd140b3a6b-kube-api-access-dpgfv\") pod \"dns-operator-799b87ffcd-42qdh\" (UID: 
\"9ae31159-3efc-4516-830c-cabd140b3a6b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-42qdh" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505080 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8d229a45-586a-4cf8-9e25-fd80224017fb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-pw8wg\" (UID: \"8d229a45-586a-4cf8-9e25-fd80224017fb\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505132 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-trusted-ca-bundle\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505158 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505181 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-tmp\") pod \"route-controller-manager-776cdc94d6-r7lsl\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505218 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3706088-2315-4b35-852b-1327e8a99d18-console-serving-cert\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505242 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5e9a5cea-a4d8-4f7e-8ad4-708b692c7372-available-featuregates\") pod \"openshift-config-operator-5777786469-x4svw\" (UID: \"5e9a5cea-a4d8-4f7e-8ad4-708b692c7372\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505264 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58t5p\" (UniqueName: \"kubernetes.io/projected/5e9a5cea-a4d8-4f7e-8ad4-708b692c7372-kube-api-access-58t5p\") pod \"openshift-config-operator-5777786469-x4svw\" (UID: \"5e9a5cea-a4d8-4f7e-8ad4-708b692c7372\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505285 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-config\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: 
\"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505310 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f436eb65-e5b7-4b61-9072-699f6c071102-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505336 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9-machine-approver-tls\") pod \"machine-approver-54c688565-kdm79\" (UID: \"f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505362 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/627f3889-5fe0-4a44-9def-9363af7a5979-config\") pod \"openshift-controller-manager-operator-686468bdd5-kmf69\" (UID: \"627f3889-5fe0-4a44-9def-9363af7a5979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505389 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c88c0b-1e12-405a-96ef-49bab04d20f5-config\") pod \"machine-api-operator-755bb95488-52tsw\" (UID: \"89c88c0b-1e12-405a-96ef-49bab04d20f5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505413 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/627f3889-5fe0-4a44-9def-9363af7a5979-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-kmf69\" (UID: \"627f3889-5fe0-4a44-9def-9363af7a5979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505433 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a6214c5-1554-43a3-82d3-65532d7a79a4-serving-cert\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505452 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f436eb65-e5b7-4b61-9072-699f6c071102-tmp\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505477 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6ptb\" (UniqueName: 
\"kubernetes.io/projected/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-kube-api-access-w6ptb\") pod \"route-controller-manager-776cdc94d6-r7lsl\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505485 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505581 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505499 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-serving-cert\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505646 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/89c88c0b-1e12-405a-96ef-49bab04d20f5-images\") pod \"machine-api-operator-755bb95488-52tsw\" (UID: \"89c88c0b-1e12-405a-96ef-49bab04d20f5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505667 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-etcd-ca\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505718 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/f436eb65-e5b7-4b61-9072-699f6c071102-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505753 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97jqs\" (UniqueName: \"kubernetes.io/projected/f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9-kube-api-access-97jqs\") pod \"machine-approver-54c688565-kdm79\" (UID: \"f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505777 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-etcd-client\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505798 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e396c726-e045-447e-9420-93f09255e695-samples-operator-tls\") 
pod \"cluster-samples-operator-6b564684c8-km7vh\" (UID: \"e396c726-e045-447e-9420-93f09255e695\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-km7vh" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505819 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-audit-policies\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505841 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505867 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3706088-2315-4b35-852b-1327e8a99d18-console-oauth-config\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505902 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505929 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rzx5\" (UniqueName: \"kubernetes.io/projected/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-kube-api-access-5rzx5\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.505987 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d229a45-586a-4cf8-9e25-fd80224017fb-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-pw8wg\" (UID: \"8d229a45-586a-4cf8-9e25-fd80224017fb\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506008 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmzg7\" (UniqueName: \"kubernetes.io/projected/89c88c0b-1e12-405a-96ef-49bab04d20f5-kube-api-access-vmzg7\") pod \"machine-api-operator-755bb95488-52tsw\" (UID: \"89c88c0b-1e12-405a-96ef-49bab04d20f5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506024 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9-auth-proxy-config\") pod \"machine-approver-54c688565-kdm79\" (UID: \"f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506043 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-etcd-serving-ca\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506059 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506081 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e9a5cea-a4d8-4f7e-8ad4-708b692c7372-serving-cert\") pod \"openshift-config-operator-5777786469-x4svw\" (UID: \"5e9a5cea-a4d8-4f7e-8ad4-708b692c7372\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506100 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506121 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506138 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506154 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3706088-2315-4b35-852b-1327e8a99d18-console-config\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506169 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-6dhh8\" (UniqueName: \"kubernetes.io/projected/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-kube-api-access-6dhh8\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506184 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-tmp-dir\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506201 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-config\") pod \"route-controller-manager-776cdc94d6-r7lsl\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506215 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89cfa976-43a8-469f-ba75-7a630ae3e072-config\") pod \"openshift-apiserver-operator-846cbfc458-9qk2k\" (UID: \"89cfa976-43a8-469f-ba75-7a630ae3e072\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506233 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89cfa976-43a8-469f-ba75-7a630ae3e072-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-9qk2k\" (UID: \"89cfa976-43a8-469f-ba75-7a630ae3e072\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506250 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-audit-policies\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506264 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/89c88c0b-1e12-405a-96ef-49bab04d20f5-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-52tsw\" (UID: \"89c88c0b-1e12-405a-96ef-49bab04d20f5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506279 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-config\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506293 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/9ae31159-3efc-4516-830c-cabd140b3a6b-metrics-tls\") pod \"dns-operator-799b87ffcd-42qdh\" (UID: \"9ae31159-3efc-4516-830c-cabd140b3a6b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-42qdh" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506308 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-client-ca\") pod \"route-controller-manager-776cdc94d6-r7lsl\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506328 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m59ct\" (UniqueName: \"kubernetes.io/projected/f3706088-2315-4b35-852b-1327e8a99d18-kube-api-access-m59ct\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506343 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xppl\" (UniqueName: \"kubernetes.io/projected/e396c726-e045-447e-9420-93f09255e695-kube-api-access-7xppl\") pod \"cluster-samples-operator-6b564684c8-km7vh\" (UID: \"e396c726-e045-447e-9420-93f09255e695\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-km7vh" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506363 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p25d2\" (UniqueName: \"kubernetes.io/projected/4a6214c5-1554-43a3-82d3-65532d7a79a4-kube-api-access-p25d2\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506378 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506392 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-serving-cert\") pod \"route-controller-manager-776cdc94d6-r7lsl\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506409 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3706088-2315-4b35-852b-1327e8a99d18-oauth-serving-cert\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506424 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-encryption-config\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506444 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d229a45-586a-4cf8-9e25-fd80224017fb-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-pw8wg\" (UID: \"8d229a45-586a-4cf8-9e25-fd80224017fb\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506461 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4a6214c5-1554-43a3-82d3-65532d7a79a4-tmp\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506475 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/908467fd-ce00-441d-a504-dce785c290f2-audit-dir\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506489 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9-config\") pod \"machine-approver-54c688565-kdm79\" (UID: \"f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506516 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3706088-2315-4b35-852b-1327e8a99d18-service-ca\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506531 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506545 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506563 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f3706088-2315-4b35-852b-1327e8a99d18-trusted-ca-bundle\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506580 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmkvh\" (UniqueName: \"kubernetes.io/projected/627f3889-5fe0-4a44-9def-9363af7a5979-kube-api-access-cmkvh\") pod \"openshift-controller-manager-operator-686468bdd5-kmf69\" (UID: \"627f3889-5fe0-4a44-9def-9363af7a5979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506595 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-etcd-service-ca\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506627 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-etcd-client\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.506642 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d229a45-586a-4cf8-9e25-fd80224017fb-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-pw8wg\" (UID: \"8d229a45-586a-4cf8-9e25-fd80224017fb\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.508019 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.508106 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-6htrq"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.508557 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.509138 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.511801 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.511937 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6htrq" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.517152 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.517052 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-nqzwb"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.520461 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.520639 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-nqzwb" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.523146 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-d772b"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.523266 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.526443 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.526735 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.526862 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-d772b" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.530327 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-k9645"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.534432 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.534646 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.535580 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.541810 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.541990 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.543640 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.545258 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.545559 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.547862 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.548128 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.550242 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.550545 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.553150 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-xd87d"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.553472 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.558933 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.559044 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.564019 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.567141 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.567506 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.570376 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.570537 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.575389 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.575610 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.581669 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-gsv2s"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.581872 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.585118 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-g2vff"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.585143 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-42qdh"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.585197 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-x4svw"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.585209 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.585221 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-tx992"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.585233 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-8sdgn"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.585269 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zttss"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.585305 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9ctzk"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.592695 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-k8jnm"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.593032 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.593334 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.595332 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.596323 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-fbkc6"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.596594 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-k8jnm" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.599587 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-6wc7f"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.599858 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.603864 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2gk88"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.604184 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6wc7f" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.604607 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607591 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b0adc6af-a11b-4bad-83bc-e1be5945c05f-image-import-ca\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607629 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99f99897-fb5a-4ed4-8687-68b4a8ac5c2b-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-dfvj6\" (UID: \"99f99897-fb5a-4ed4-8687-68b4a8ac5c2b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607682 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/908467fd-ce00-441d-a504-dce785c290f2-audit-dir\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607721 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3706088-2315-4b35-852b-1327e8a99d18-service-ca\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607769 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607797 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607807 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607833 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607846 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607847 5116 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3706088-2315-4b35-852b-1327e8a99d18-trusted-ca-bundle\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607859 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-k8jnm"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607867 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b0adc6af-a11b-4bad-83bc-e1be5945c05f-encryption-config\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607884 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-etcd-service-ca\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607918 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d229a45-586a-4cf8-9e25-fd80224017fb-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-pw8wg\" (UID: \"8d229a45-586a-4cf8-9e25-fd80224017fb\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607947 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.608304 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/908467fd-ce00-441d-a504-dce785c290f2-audit-dir\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.607870 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.608471 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-nqzwb"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.608530 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-d772b"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.608582 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.608643 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-52tsw"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.608702 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-6htrq"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.608754 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.608811 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.608863 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-km7vh"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.608918 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-8k9f4"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609000 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609064 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609119 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-zssql"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609182 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609245 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609302 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn"] Dec 09 
14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609354 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fbkc6"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609469 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609532 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609592 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-5rkz7"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609646 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609705 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609761 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-k9645"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609814 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609874 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2gk88"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609931 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-gsv2s"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609998 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.610051 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4"] Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.608646 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8d229a45-586a-4cf8-9e25-fd80224017fb-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-pw8wg\" (UID: \"8d229a45-586a-4cf8-9e25-fd80224017fb\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.610191 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n69t9\" (UniqueName: \"kubernetes.io/projected/89cfa976-43a8-469f-ba75-7a630ae3e072-kube-api-access-n69t9\") pod \"openshift-apiserver-operator-846cbfc458-9qk2k\" (UID: \"89cfa976-43a8-469f-ba75-7a630ae3e072\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.610266 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b0adc6af-a11b-4bad-83bc-e1be5945c05f-node-pullsecrets\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.610347 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-spfpb\" (UniqueName: \"kubernetes.io/projected/908467fd-ce00-441d-a504-dce785c290f2-kube-api-access-spfpb\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.610418 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbtcg\" (UniqueName: \"kubernetes.io/projected/42fe2705-f9ee-4e26-8e56-2730ba8f6196-kube-api-access-wbtcg\") pod \"marketplace-operator-547dbd544d-k9645\" (UID: \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.610374 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3706088-2315-4b35-852b-1327e8a99d18-trusted-ca-bundle\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609939 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-etcd-service-ca\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.608681 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8d229a45-586a-4cf8-9e25-fd80224017fb-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-pw8wg\" (UID: \"8d229a45-586a-4cf8-9e25-fd80224017fb\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.608622 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.610488 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4sg\" (UniqueName: \"kubernetes.io/projected/16b3601d-79fa-43bf-ac32-ff4af19c5a3f-kube-api-access-ww4sg\") pod \"package-server-manager-77f986bd66-2r96f\" (UID: \"16b3601d-79fa-43bf-ac32-ff4af19c5a3f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.609437 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3706088-2315-4b35-852b-1327e8a99d18-service-ca\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.610771 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x7bl\" (UniqueName: \"kubernetes.io/projected/fdd55bc4-7b67-4fca-9873-1198fb68274b-kube-api-access-6x7bl\") pod \"console-operator-67c89758df-zssql\" (UID: \"fdd55bc4-7b67-4fca-9873-1198fb68274b\") " pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.610805 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w9bbm\" (UniqueName: \"kubernetes.io/projected/22f707dd-e3f1-40e4-bc80-72d0d0ccf8ad-kube-api-access-w9bbm\") pod \"downloads-747b44746d-tx992\" (UID: \"22f707dd-e3f1-40e4-bc80-72d0d0ccf8ad\") " pod="openshift-console/downloads-747b44746d-tx992" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.610828 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drrpj\" (UniqueName: \"kubernetes.io/projected/39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada-kube-api-access-drrpj\") pod \"ingress-operator-6b9cb4dbcf-dgmvt\" (UID: \"39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.610851 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62cc8950-967d-4052-a42a-8b4223a1f9ab-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-k254x\" (UID: \"62cc8950-967d-4052-a42a-8b4223a1f9ab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.610872 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b0adc6af-a11b-4bad-83bc-e1be5945c05f-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.610899 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/627f3889-5fe0-4a44-9def-9363af7a5979-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-kmf69\" (UID: \"627f3889-5fe0-4a44-9def-9363af7a5979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.610921 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9ae31159-3efc-4516-830c-cabd140b3a6b-tmp-dir\") pod \"dns-operator-799b87ffcd-42qdh\" (UID: \"9ae31159-3efc-4516-830c-cabd140b3a6b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-42qdh" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.610975 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-default-certificate\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611003 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-client-ca\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611025 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8d229a45-586a-4cf8-9e25-fd80224017fb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-pw8wg\" (UID: \"8d229a45-586a-4cf8-9e25-fd80224017fb\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611049 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b42f0915-c83b-4abd-a4ba-144cd754c9a6-apiservice-cert\") pod \"packageserver-7d4fc7d867-mglqx\" (UID: \"b42f0915-c83b-4abd-a4ba-144cd754c9a6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611081 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-trusted-ca-bundle\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611103 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-tmp\") pod \"route-controller-manager-776cdc94d6-r7lsl\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611124 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzcpr\" (UniqueName: \"kubernetes.io/projected/fc5fc2fd-0910-4e85-a543-060ec0dff17a-kube-api-access-mzcpr\") pod \"kube-storage-version-migrator-operator-565b79b866-6xptq\" (UID: \"fc5fc2fd-0910-4e85-a543-060ec0dff17a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611149 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-dgmvt\" (UID: \"39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611177 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-58t5p\" (UniqueName: \"kubernetes.io/projected/5e9a5cea-a4d8-4f7e-8ad4-708b692c7372-kube-api-access-58t5p\") pod \"openshift-config-operator-5777786469-x4svw\" (UID: \"5e9a5cea-a4d8-4f7e-8ad4-708b692c7372\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611199 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9-serving-cert\") pod \"authentication-operator-7f5c659b84-gj79t\" (UID: \"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611220 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2crzr\" (UniqueName: \"kubernetes.io/projected/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-kube-api-access-2crzr\") pod \"collect-profiles-29421495-49k9q\" (UID: \"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611245 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-config\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611266 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grdw9\" (UniqueName: \"kubernetes.io/projected/8699f161-ee9b-4da7-9074-6af031d37b61-kube-api-access-grdw9\") pod \"multus-admission-controller-69db94689b-nqzwb\" (UID: \"8699f161-ee9b-4da7-9074-6af031d37b61\") " pod="openshift-multus/multus-admission-controller-69db94689b-nqzwb" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611282 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9ae31159-3efc-4516-830c-cabd140b3a6b-tmp-dir\") pod \"dns-operator-799b87ffcd-42qdh\" (UID: \"9ae31159-3efc-4516-830c-cabd140b3a6b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-42qdh" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611289 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/627f3889-5fe0-4a44-9def-9363af7a5979-config\") pod \"openshift-controller-manager-operator-686468bdd5-kmf69\" (UID: \"627f3889-5fe0-4a44-9def-9363af7a5979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611321 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/10858c0e-9efb-4468-bfec-3ef3aa1a6579-signing-cabundle\") pod \"service-ca-74545575db-d772b\" (UID: \"10858c0e-9efb-4468-bfec-3ef3aa1a6579\") " pod="openshift-service-ca/service-ca-74545575db-d772b" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611345 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n86ff\" (UniqueName: \"kubernetes.io/projected/ee06afad-5b97-43f1-b61b-cd275363814c-kube-api-access-n86ff\") pod \"service-ca-operator-5b9c976747-dqds9\" (UID: \"ee06afad-5b97-43f1-b61b-cd275363814c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611347 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/8d229a45-586a-4cf8-9e25-fd80224017fb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-pw8wg\" (UID: \"8d229a45-586a-4cf8-9e25-fd80224017fb\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611366 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/627f3889-5fe0-4a44-9def-9363af7a5979-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-kmf69\" (UID: \"627f3889-5fe0-4a44-9def-9363af7a5979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611377 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b0adc6af-a11b-4bad-83bc-e1be5945c05f-audit\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611436 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b0adc6af-a11b-4bad-83bc-e1be5945c05f-audit-dir\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611470 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a6214c5-1554-43a3-82d3-65532d7a79a4-serving-cert\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611491 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f436eb65-e5b7-4b61-9072-699f6c071102-tmp\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611514 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62cc8950-967d-4052-a42a-8b4223a1f9ab-srv-cert\") pod \"catalog-operator-75ff9f647d-k254x\" (UID: \"62cc8950-967d-4052-a42a-8b4223a1f9ab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611539 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-jd2l4\" (UID: \"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611597 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/f436eb65-e5b7-4b61-9072-699f6c071102-ca-trust-extracted-pem\") pod 
\"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611619 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3316637-1d62-4c98-a599-437d5d706de7-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-dnlz8\" (UID: \"f3316637-1d62-4c98-a599-437d5d706de7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611638 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b0adc6af-a11b-4bad-83bc-e1be5945c05f-etcd-client\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611672 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-97jqs\" (UniqueName: \"kubernetes.io/projected/f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9-kube-api-access-97jqs\") pod \"machine-approver-54c688565-kdm79\" (UID: \"f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611793 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdd55bc4-7b67-4fca-9873-1198fb68274b-serving-cert\") pod \"console-operator-67c89758df-zssql\" (UID: \"fdd55bc4-7b67-4fca-9873-1198fb68274b\") " pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611840 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-audit-policies\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611864 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611889 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-config\") pod \"kube-controller-manager-operator-69d5f845f8-jd2l4\" (UID: \"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611910 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42fe2705-f9ee-4e26-8e56-2730ba8f6196-tmp\") pod \"marketplace-operator-547dbd544d-k9645\" 
(UID: \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.611999 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3706088-2315-4b35-852b-1327e8a99d18-console-oauth-config\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.612049 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmzg7\" (UniqueName: \"kubernetes.io/projected/89c88c0b-1e12-405a-96ef-49bab04d20f5-kube-api-access-vmzg7\") pod \"machine-api-operator-755bb95488-52tsw\" (UID: \"89c88c0b-1e12-405a-96ef-49bab04d20f5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.612074 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9-auth-proxy-config\") pod \"machine-approver-54c688565-kdm79\" (UID: \"f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.613840 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-etcd-serving-ca\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.614244 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/627f3889-5fe0-4a44-9def-9363af7a5979-config\") pod \"openshift-controller-manager-operator-686468bdd5-kmf69\" (UID: \"627f3889-5fe0-4a44-9def-9363af7a5979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.615113 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-trusted-ca-bundle\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.615324 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-client-ca\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.615609 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d229a45-586a-4cf8-9e25-fd80224017fb-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-pw8wg\" (UID: \"8d229a45-586a-4cf8-9e25-fd80224017fb\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.615611 5116 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-config\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.615609 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.615851 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-tmp\") pod \"route-controller-manager-776cdc94d6-r7lsl\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.615901 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8rgq\" (UniqueName: \"kubernetes.io/projected/99f99897-fb5a-4ed4-8687-68b4a8ac5c2b-kube-api-access-g8rgq\") pod \"machine-config-controller-f9cdd68f7-dfvj6\" (UID: \"99f99897-fb5a-4ed4-8687-68b4a8ac5c2b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.615946 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.616044 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.616086 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a24e13f1-113b-4044-842b-a13d6d620655-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-zttss\" (UID: \"a24e13f1-113b-4044-842b-a13d6d620655\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zttss" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.616047 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/f436eb65-e5b7-4b61-9072-699f6c071102-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 
14:16:13.616372 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-audit-policies\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.616379 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3706088-2315-4b35-852b-1327e8a99d18-console-config\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.616455 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6dhh8\" (UniqueName: \"kubernetes.io/projected/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-kube-api-access-6dhh8\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.616487 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-tmp-dir\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.616663 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-config\") pod \"route-controller-manager-776cdc94d6-r7lsl\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.616718 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89cfa976-43a8-469f-ba75-7a630ae3e072-config\") pod \"openshift-apiserver-operator-846cbfc458-9qk2k\" (UID: \"89cfa976-43a8-469f-ba75-7a630ae3e072\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.617122 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-tmp-dir\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.617166 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f436eb65-e5b7-4b61-9072-699f6c071102-tmp\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.617770 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-etcd-serving-ca\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.617905 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89cfa976-43a8-469f-ba75-7a630ae3e072-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-9qk2k\" (UID: \"89cfa976-43a8-469f-ba75-7a630ae3e072\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.617930 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9-auth-proxy-config\") pod \"machine-approver-54c688565-kdm79\" (UID: \"f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.617977 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5fc2fd-0910-4e85-a543-060ec0dff17a-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6xptq\" (UID: \"fc5fc2fd-0910-4e85-a543-060ec0dff17a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.618069 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-audit-policies\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.618113 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/89c88c0b-1e12-405a-96ef-49bab04d20f5-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-52tsw\" (UID: \"89c88c0b-1e12-405a-96ef-49bab04d20f5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.618485 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3706088-2315-4b35-852b-1327e8a99d18-console-config\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.618591 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-config\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.618654 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-client-ca\") pod \"route-controller-manager-776cdc94d6-r7lsl\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.619387 5116 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89cfa976-43a8-469f-ba75-7a630ae3e072-config\") pod \"openshift-apiserver-operator-846cbfc458-9qk2k\" (UID: \"89cfa976-43a8-469f-ba75-7a630ae3e072\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.619396 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.619814 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-audit-policies\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.619925 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m59ct\" (UniqueName: \"kubernetes.io/projected/f3706088-2315-4b35-852b-1327e8a99d18-kube-api-access-m59ct\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.620069 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7xppl\" (UniqueName: \"kubernetes.io/projected/e396c726-e045-447e-9420-93f09255e695-kube-api-access-7xppl\") pod \"cluster-samples-operator-6b564684c8-km7vh\" (UID: \"e396c726-e045-447e-9420-93f09255e695\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-km7vh" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.620152 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b42f0915-c83b-4abd-a4ba-144cd754c9a6-webhook-cert\") pod \"packageserver-7d4fc7d867-mglqx\" (UID: \"b42f0915-c83b-4abd-a4ba-144cd754c9a6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.620162 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-config\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.620305 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p25d2\" (UniqueName: \"kubernetes.io/projected/4a6214c5-1554-43a3-82d3-65532d7a79a4-kube-api-access-p25d2\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.620408 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.620618 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-client-ca\") pod \"route-controller-manager-776cdc94d6-r7lsl\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.620650 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-encryption-config\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.620687 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d229a45-586a-4cf8-9e25-fd80224017fb-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-pw8wg\" (UID: \"8d229a45-586a-4cf8-9e25-fd80224017fb\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.620728 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f3316637-1d62-4c98-a599-437d5d706de7-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-dnlz8\" (UID: \"f3316637-1d62-4c98-a599-437d5d706de7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.620857 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dclzk\" (UniqueName: \"kubernetes.io/projected/327bd87a-2375-4b04-b49f-173966bba4fc-kube-api-access-dclzk\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.620949 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4a6214c5-1554-43a3-82d3-65532d7a79a4-tmp\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.621037 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9-config\") pod \"machine-approver-54c688565-kdm79\" (UID: \"f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.621101 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-config-volume\") pod \"collect-profiles-29421495-49k9q\" (UID: 
\"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.621065 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-config\") pod \"route-controller-manager-776cdc94d6-r7lsl\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.621305 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3706088-2315-4b35-852b-1327e8a99d18-console-oauth-config\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.621397 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4a6214c5-1554-43a3-82d3-65532d7a79a4-tmp\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.621467 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/10858c0e-9efb-4468-bfec-3ef3aa1a6579-signing-key\") pod \"service-ca-74545575db-d772b\" (UID: \"10858c0e-9efb-4468-bfec-3ef3aa1a6579\") " pod="openshift-service-ca/service-ca-74545575db-d772b" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.621518 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cmkvh\" (UniqueName: \"kubernetes.io/projected/627f3889-5fe0-4a44-9def-9363af7a5979-kube-api-access-cmkvh\") pod \"openshift-controller-manager-operator-686468bdd5-kmf69\" (UID: \"627f3889-5fe0-4a44-9def-9363af7a5979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.621558 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8699f161-ee9b-4da7-9074-6af031d37b61-webhook-certs\") pod \"multus-admission-controller-69db94689b-nqzwb\" (UID: \"8699f161-ee9b-4da7-9074-6af031d37b61\") " pod="openshift-multus/multus-admission-controller-69db94689b-nqzwb" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.621610 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-etcd-client\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.621648 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncxfj\" (UniqueName: \"kubernetes.io/projected/62cc8950-967d-4052-a42a-8b4223a1f9ab-kube-api-access-ncxfj\") pod \"catalog-operator-75ff9f647d-k254x\" (UID: \"62cc8950-967d-4052-a42a-8b4223a1f9ab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 
14:16:13.621676 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-secret-volume\") pod \"collect-profiles-29421495-49k9q\" (UID: \"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.621941 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/89c88c0b-1e12-405a-96ef-49bab04d20f5-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-52tsw\" (UID: \"89c88c0b-1e12-405a-96ef-49bab04d20f5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.622087 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9-config\") pod \"machine-approver-54c688565-kdm79\" (UID: \"f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.622469 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj78b\" (UniqueName: \"kubernetes.io/projected/1c05eb02-555a-4906-85e9-3a0eac0cdbc2-kube-api-access-hj78b\") pod \"migrator-866fcbc849-6htrq\" (UID: \"1c05eb02-555a-4906-85e9-3a0eac0cdbc2\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6htrq" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.622790 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-serving-cert\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.622799 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89cfa976-43a8-469f-ba75-7a630ae3e072-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-9qk2k\" (UID: \"89cfa976-43a8-469f-ba75-7a630ae3e072\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.623068 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdd55bc4-7b67-4fca-9873-1198fb68274b-trusted-ca\") pod \"console-operator-67c89758df-zssql\" (UID: \"fdd55bc4-7b67-4fca-9873-1198fb68274b\") " pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.623284 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f436eb65-e5b7-4b61-9072-699f6c071102-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.623418 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b0adc6af-a11b-4bad-83bc-e1be5945c05f-config\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.623536 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.623662 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f436eb65-e5b7-4b61-9072-699f6c071102-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.623770 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.624748 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.625156 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.625646 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl6fp\" (UniqueName: \"kubernetes.io/projected/a24e13f1-113b-4044-842b-a13d6d620655-kube-api-access-nl6fp\") pod \"control-plane-machine-set-operator-75ffdb6fcd-zttss\" (UID: \"a24e13f1-113b-4044-842b-a13d6d620655\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zttss" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.625702 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-stats-auth\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.625725 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-dgmvt\" (UID: \"39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.625759 5116 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-jd2l4\" (UID: \"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.625791 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-br69p\" (UniqueName: \"kubernetes.io/projected/f436eb65-e5b7-4b61-9072-699f6c071102-kube-api-access-br69p\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.625814 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc5fc2fd-0910-4e85-a543-060ec0dff17a-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6xptq\" (UID: \"fc5fc2fd-0910-4e85-a543-060ec0dff17a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626254 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-audit-dir\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626300 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dpgfv\" (UniqueName: \"kubernetes.io/projected/9ae31159-3efc-4516-830c-cabd140b3a6b-kube-api-access-dpgfv\") pod \"dns-operator-799b87ffcd-42qdh\" (UID: \"9ae31159-3efc-4516-830c-cabd140b3a6b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-42qdh" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626340 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626376 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5kwz\" (UniqueName: \"kubernetes.io/projected/b42f0915-c83b-4abd-a4ba-144cd754c9a6-kube-api-access-d5kwz\") pod \"packageserver-7d4fc7d867-mglqx\" (UID: \"b42f0915-c83b-4abd-a4ba-144cd754c9a6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626411 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3706088-2315-4b35-852b-1327e8a99d18-console-serving-cert\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626443 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5e9a5cea-a4d8-4f7e-8ad4-708b692c7372-available-featuregates\") pod \"openshift-config-operator-5777786469-x4svw\" (UID: \"5e9a5cea-a4d8-4f7e-8ad4-708b692c7372\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626477 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f436eb65-e5b7-4b61-9072-699f6c071102-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626503 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee06afad-5b97-43f1-b61b-cd275363814c-config\") pod \"service-ca-operator-5b9c976747-dqds9\" (UID: \"ee06afad-5b97-43f1-b61b-cd275363814c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626534 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-jd2l4\" (UID: \"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626568 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9-machine-approver-tls\") pod \"machine-approver-54c688565-kdm79\" (UID: \"f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626602 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-dgmvt\" (UID: \"39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626619 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626633 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c88c0b-1e12-405a-96ef-49bab04d20f5-config\") pod \"machine-api-operator-755bb95488-52tsw\" (UID: \"89c88c0b-1e12-405a-96ef-49bab04d20f5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626659 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/99f99897-fb5a-4ed4-8687-68b4a8ac5c2b-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-dfvj6\" (UID: \"99f99897-fb5a-4ed4-8687-68b4a8ac5c2b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626688 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/627f3889-5fe0-4a44-9def-9363af7a5979-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-kmf69\" (UID: \"627f3889-5fe0-4a44-9def-9363af7a5979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626717 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w6ptb\" (UniqueName: \"kubernetes.io/projected/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-kube-api-access-w6ptb\") pod \"route-controller-manager-776cdc94d6-r7lsl\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626751 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-serving-cert\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626780 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/89c88c0b-1e12-405a-96ef-49bab04d20f5-images\") pod \"machine-api-operator-755bb95488-52tsw\" (UID: \"89c88c0b-1e12-405a-96ef-49bab04d20f5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626802 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-etcd-ca\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626824 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626835 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3316637-1d62-4c98-a599-437d5d706de7-config\") pod \"openshift-kube-scheduler-operator-54f497555d-dnlz8\" (UID: \"f3316637-1d62-4c98-a599-437d5d706de7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626865 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9-config\") pod \"authentication-operator-7f5c659b84-gj79t\" (UID: \"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626893 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0adc6af-a11b-4bad-83bc-e1be5945c05f-serving-cert\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626923 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9s4l\" (UniqueName: \"kubernetes.io/projected/7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9-kube-api-access-d9s4l\") pod \"authentication-operator-7f5c659b84-gj79t\" (UID: \"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626982 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-etcd-client\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627016 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e396c726-e045-447e-9420-93f09255e695-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-km7vh\" (UID: \"e396c726-e045-447e-9420-93f09255e695\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-km7vh" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627044 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d229a45-586a-4cf8-9e25-fd80224017fb-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-pw8wg\" (UID: \"8d229a45-586a-4cf8-9e25-fd80224017fb\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627156 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f436eb65-e5b7-4b61-9072-699f6c071102-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627211 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5e9a5cea-a4d8-4f7e-8ad4-708b692c7372-available-featuregates\") pod \"openshift-config-operator-5777786469-x4svw\" (UID: \"5e9a5cea-a4d8-4f7e-8ad4-708b692c7372\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627247 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/62cc8950-967d-4052-a42a-8b4223a1f9ab-tmpfs\") pod 
\"catalog-operator-75ff9f647d-k254x\" (UID: \"62cc8950-967d-4052-a42a-8b4223a1f9ab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627280 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627302 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5rzx5\" (UniqueName: \"kubernetes.io/projected/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-kube-api-access-5rzx5\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627323 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd55bc4-7b67-4fca-9873-1198fb68274b-config\") pod \"console-operator-67c89758df-zssql\" (UID: \"fdd55bc4-7b67-4fca-9873-1198fb68274b\") " pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627345 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3316637-1d62-4c98-a599-437d5d706de7-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-dnlz8\" (UID: \"f3316637-1d62-4c98-a599-437d5d706de7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627365 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0adc6af-a11b-4bad-83bc-e1be5945c05f-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627387 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq9lg\" (UniqueName: \"kubernetes.io/projected/b0adc6af-a11b-4bad-83bc-e1be5945c05f-kube-api-access-wq9lg\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627413 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627435 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-gj79t\" (UID: 
\"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627454 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e9a5cea-a4d8-4f7e-8ad4-708b692c7372-serving-cert\") pod \"openshift-config-operator-5777786469-x4svw\" (UID: \"5e9a5cea-a4d8-4f7e-8ad4-708b692c7372\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627475 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627498 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee06afad-5b97-43f1-b61b-cd275363814c-serving-cert\") pod \"service-ca-operator-5b9c976747-dqds9\" (UID: \"ee06afad-5b97-43f1-b61b-cd275363814c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627518 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42fe2705-f9ee-4e26-8e56-2730ba8f6196-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-k9645\" (UID: \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627545 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-metrics-certs\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627570 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/16b3601d-79fa-43bf-ac32-ff4af19c5a3f-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-2r96f\" (UID: \"16b3601d-79fa-43bf-ac32-ff4af19c5a3f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627598 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b42f0915-c83b-4abd-a4ba-144cd754c9a6-tmpfs\") pod \"packageserver-7d4fc7d867-mglqx\" (UID: \"b42f0915-c83b-4abd-a4ba-144cd754c9a6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627623 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ae31159-3efc-4516-830c-cabd140b3a6b-metrics-tls\") pod \"dns-operator-799b87ffcd-42qdh\" (UID: \"9ae31159-3efc-4516-830c-cabd140b3a6b\") " 
pod="openshift-dns-operator/dns-operator-799b87ffcd-42qdh" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627646 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-gj79t\" (UID: \"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627671 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-serving-cert\") pod \"route-controller-manager-776cdc94d6-r7lsl\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627691 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxnn2\" (UniqueName: \"kubernetes.io/projected/10858c0e-9efb-4468-bfec-3ef3aa1a6579-kube-api-access-kxnn2\") pod \"service-ca-74545575db-d772b\" (UID: \"10858c0e-9efb-4468-bfec-3ef3aa1a6579\") " pod="openshift-service-ca/service-ca-74545575db-d772b" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627694 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-audit-dir\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627716 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/327bd87a-2375-4b04-b49f-173966bba4fc-service-ca-bundle\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627779 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42fe2705-f9ee-4e26-8e56-2730ba8f6196-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-k9645\" (UID: \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.627804 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3706088-2315-4b35-852b-1327e8a99d18-oauth-serving-cert\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.626271 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-serving-cert\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.628523 5116 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3706088-2315-4b35-852b-1327e8a99d18-oauth-serving-cert\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.628637 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d229a45-586a-4cf8-9e25-fd80224017fb-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-pw8wg\" (UID: \"8d229a45-586a-4cf8-9e25-fd80224017fb\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.628402 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a6214c5-1554-43a3-82d3-65532d7a79a4-serving-cert\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.629526 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/89c88c0b-1e12-405a-96ef-49bab04d20f5-images\") pod \"machine-api-operator-755bb95488-52tsw\" (UID: \"89c88c0b-1e12-405a-96ef-49bab04d20f5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.629707 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c88c0b-1e12-405a-96ef-49bab04d20f5-config\") pod \"machine-api-operator-755bb95488-52tsw\" (UID: \"89c88c0b-1e12-405a-96ef-49bab04d20f5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.630136 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.630226 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-etcd-ca\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.630358 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f436eb65-e5b7-4b61-9072-699f6c071102-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.631559 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.631900 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.632502 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-etcd-client\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.632809 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/627f3889-5fe0-4a44-9def-9363af7a5979-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-kmf69\" (UID: \"627f3889-5fe0-4a44-9def-9363af7a5979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.634160 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-encryption-config\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.633891 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.634807 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9-machine-approver-tls\") pod \"machine-approver-54c688565-kdm79\" (UID: \"f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.634888 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-etcd-client\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.635070 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3706088-2315-4b35-852b-1327e8a99d18-console-serving-cert\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.635508 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/9ae31159-3efc-4516-830c-cabd140b3a6b-metrics-tls\") pod \"dns-operator-799b87ffcd-42qdh\" (UID: \"9ae31159-3efc-4516-830c-cabd140b3a6b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-42qdh" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.635643 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e9a5cea-a4d8-4f7e-8ad4-708b692c7372-serving-cert\") pod \"openshift-config-operator-5777786469-x4svw\" (UID: \"5e9a5cea-a4d8-4f7e-8ad4-708b692c7372\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.636090 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-serving-cert\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.636243 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-serving-cert\") pod \"route-controller-manager-776cdc94d6-r7lsl\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.636246 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.640882 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e396c726-e045-447e-9420-93f09255e695-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-km7vh\" (UID: \"e396c726-e045-447e-9420-93f09255e695\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-km7vh" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.645584 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.646640 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.672268 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.685072 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.724538 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 09 
14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729559 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5fc2fd-0910-4e85-a543-060ec0dff17a-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6xptq\" (UID: \"fc5fc2fd-0910-4e85-a543-060ec0dff17a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729626 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b42f0915-c83b-4abd-a4ba-144cd754c9a6-webhook-cert\") pod \"packageserver-7d4fc7d867-mglqx\" (UID: \"b42f0915-c83b-4abd-a4ba-144cd754c9a6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729661 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f3316637-1d62-4c98-a599-437d5d706de7-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-dnlz8\" (UID: \"f3316637-1d62-4c98-a599-437d5d706de7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729678 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dclzk\" (UniqueName: \"kubernetes.io/projected/327bd87a-2375-4b04-b49f-173966bba4fc-kube-api-access-dclzk\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729696 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-config-volume\") pod \"collect-profiles-29421495-49k9q\" (UID: \"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729715 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/10858c0e-9efb-4468-bfec-3ef3aa1a6579-signing-key\") pod \"service-ca-74545575db-d772b\" (UID: \"10858c0e-9efb-4468-bfec-3ef3aa1a6579\") " pod="openshift-service-ca/service-ca-74545575db-d772b" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729732 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8699f161-ee9b-4da7-9074-6af031d37b61-webhook-certs\") pod \"multus-admission-controller-69db94689b-nqzwb\" (UID: \"8699f161-ee9b-4da7-9074-6af031d37b61\") " pod="openshift-multus/multus-admission-controller-69db94689b-nqzwb" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729754 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ncxfj\" (UniqueName: \"kubernetes.io/projected/62cc8950-967d-4052-a42a-8b4223a1f9ab-kube-api-access-ncxfj\") pod \"catalog-operator-75ff9f647d-k254x\" (UID: \"62cc8950-967d-4052-a42a-8b4223a1f9ab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729770 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-secret-volume\") pod \"collect-profiles-29421495-49k9q\" (UID: \"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729786 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hj78b\" (UniqueName: \"kubernetes.io/projected/1c05eb02-555a-4906-85e9-3a0eac0cdbc2-kube-api-access-hj78b\") pod \"migrator-866fcbc849-6htrq\" (UID: \"1c05eb02-555a-4906-85e9-3a0eac0cdbc2\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6htrq" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729804 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdd55bc4-7b67-4fca-9873-1198fb68274b-trusted-ca\") pod \"console-operator-67c89758df-zssql\" (UID: \"fdd55bc4-7b67-4fca-9873-1198fb68274b\") " pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729831 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0adc6af-a11b-4bad-83bc-e1be5945c05f-config\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729860 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nl6fp\" (UniqueName: \"kubernetes.io/projected/a24e13f1-113b-4044-842b-a13d6d620655-kube-api-access-nl6fp\") pod \"control-plane-machine-set-operator-75ffdb6fcd-zttss\" (UID: \"a24e13f1-113b-4044-842b-a13d6d620655\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zttss" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729875 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-stats-auth\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729893 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-dgmvt\" (UID: \"39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729908 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-jd2l4\" (UID: \"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729925 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc5fc2fd-0910-4e85-a543-060ec0dff17a-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6xptq\" (UID: \"fc5fc2fd-0910-4e85-a543-060ec0dff17a\") 
" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729944 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d5kwz\" (UniqueName: \"kubernetes.io/projected/b42f0915-c83b-4abd-a4ba-144cd754c9a6-kube-api-access-d5kwz\") pod \"packageserver-7d4fc7d867-mglqx\" (UID: \"b42f0915-c83b-4abd-a4ba-144cd754c9a6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729981 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee06afad-5b97-43f1-b61b-cd275363814c-config\") pod \"service-ca-operator-5b9c976747-dqds9\" (UID: \"ee06afad-5b97-43f1-b61b-cd275363814c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.729998 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-jd2l4\" (UID: \"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730016 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-dgmvt\" (UID: \"39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730033 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/99f99897-fb5a-4ed4-8687-68b4a8ac5c2b-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-dfvj6\" (UID: \"99f99897-fb5a-4ed4-8687-68b4a8ac5c2b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730062 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3316637-1d62-4c98-a599-437d5d706de7-config\") pod \"openshift-kube-scheduler-operator-54f497555d-dnlz8\" (UID: \"f3316637-1d62-4c98-a599-437d5d706de7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730080 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9-config\") pod \"authentication-operator-7f5c659b84-gj79t\" (UID: \"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730095 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0adc6af-a11b-4bad-83bc-e1be5945c05f-serving-cert\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: 
I1209 14:16:13.730112 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d9s4l\" (UniqueName: \"kubernetes.io/projected/7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9-kube-api-access-d9s4l\") pod \"authentication-operator-7f5c659b84-gj79t\" (UID: \"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730131 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/62cc8950-967d-4052-a42a-8b4223a1f9ab-tmpfs\") pod \"catalog-operator-75ff9f647d-k254x\" (UID: \"62cc8950-967d-4052-a42a-8b4223a1f9ab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730149 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd55bc4-7b67-4fca-9873-1198fb68274b-config\") pod \"console-operator-67c89758df-zssql\" (UID: \"fdd55bc4-7b67-4fca-9873-1198fb68274b\") " pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730169 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfc8k\" (UniqueName: \"kubernetes.io/projected/1e0fc80f-04b1-4b8a-9c52-c615211955b0-kube-api-access-dfc8k\") pod \"olm-operator-5cdf44d969-xphvn\" (UID: \"1e0fc80f-04b1-4b8a-9c52-c615211955b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730193 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3316637-1d62-4c98-a599-437d5d706de7-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-dnlz8\" (UID: \"f3316637-1d62-4c98-a599-437d5d706de7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730218 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0adc6af-a11b-4bad-83bc-e1be5945c05f-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730235 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wq9lg\" (UniqueName: \"kubernetes.io/projected/b0adc6af-a11b-4bad-83bc-e1be5945c05f-kube-api-access-wq9lg\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730252 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-gj79t\" (UID: \"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730269 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ee06afad-5b97-43f1-b61b-cd275363814c-serving-cert\") pod \"service-ca-operator-5b9c976747-dqds9\" (UID: \"ee06afad-5b97-43f1-b61b-cd275363814c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730285 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42fe2705-f9ee-4e26-8e56-2730ba8f6196-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-k9645\" (UID: \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730301 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-metrics-certs\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730320 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/16b3601d-79fa-43bf-ac32-ff4af19c5a3f-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-2r96f\" (UID: \"16b3601d-79fa-43bf-ac32-ff4af19c5a3f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730338 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b42f0915-c83b-4abd-a4ba-144cd754c9a6-tmpfs\") pod \"packageserver-7d4fc7d867-mglqx\" (UID: \"b42f0915-c83b-4abd-a4ba-144cd754c9a6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730356 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-gj79t\" (UID: \"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730384 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kxnn2\" (UniqueName: \"kubernetes.io/projected/10858c0e-9efb-4468-bfec-3ef3aa1a6579-kube-api-access-kxnn2\") pod \"service-ca-74545575db-d772b\" (UID: \"10858c0e-9efb-4468-bfec-3ef3aa1a6579\") " pod="openshift-service-ca/service-ca-74545575db-d772b" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730403 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/327bd87a-2375-4b04-b49f-173966bba4fc-service-ca-bundle\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730427 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42fe2705-f9ee-4e26-8e56-2730ba8f6196-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-k9645\" (UID: 
\"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730449 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1e0fc80f-04b1-4b8a-9c52-c615211955b0-srv-cert\") pod \"olm-operator-5cdf44d969-xphvn\" (UID: \"1e0fc80f-04b1-4b8a-9c52-c615211955b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730473 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b0adc6af-a11b-4bad-83bc-e1be5945c05f-image-import-ca\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730492 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99f99897-fb5a-4ed4-8687-68b4a8ac5c2b-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-dfvj6\" (UID: \"99f99897-fb5a-4ed4-8687-68b4a8ac5c2b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730537 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b0adc6af-a11b-4bad-83bc-e1be5945c05f-encryption-config\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730566 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b0adc6af-a11b-4bad-83bc-e1be5945c05f-node-pullsecrets\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730635 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wbtcg\" (UniqueName: \"kubernetes.io/projected/42fe2705-f9ee-4e26-8e56-2730ba8f6196-kube-api-access-wbtcg\") pod \"marketplace-operator-547dbd544d-k9645\" (UID: \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730664 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ww4sg\" (UniqueName: \"kubernetes.io/projected/16b3601d-79fa-43bf-ac32-ff4af19c5a3f-kube-api-access-ww4sg\") pod \"package-server-manager-77f986bd66-2r96f\" (UID: \"16b3601d-79fa-43bf-ac32-ff4af19c5a3f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730690 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6x7bl\" (UniqueName: \"kubernetes.io/projected/fdd55bc4-7b67-4fca-9873-1198fb68274b-kube-api-access-6x7bl\") pod \"console-operator-67c89758df-zssql\" (UID: \"fdd55bc4-7b67-4fca-9873-1198fb68274b\") " pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730718 
5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-drrpj\" (UniqueName: \"kubernetes.io/projected/39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada-kube-api-access-drrpj\") pod \"ingress-operator-6b9cb4dbcf-dgmvt\" (UID: \"39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730734 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62cc8950-967d-4052-a42a-8b4223a1f9ab-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-k254x\" (UID: \"62cc8950-967d-4052-a42a-8b4223a1f9ab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730750 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b0adc6af-a11b-4bad-83bc-e1be5945c05f-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730777 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-default-certificate\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730796 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1e0fc80f-04b1-4b8a-9c52-c615211955b0-tmpfs\") pod \"olm-operator-5cdf44d969-xphvn\" (UID: \"1e0fc80f-04b1-4b8a-9c52-c615211955b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730814 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b42f0915-c83b-4abd-a4ba-144cd754c9a6-apiservice-cert\") pod \"packageserver-7d4fc7d867-mglqx\" (UID: \"b42f0915-c83b-4abd-a4ba-144cd754c9a6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730840 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mzcpr\" (UniqueName: \"kubernetes.io/projected/fc5fc2fd-0910-4e85-a543-060ec0dff17a-kube-api-access-mzcpr\") pod \"kube-storage-version-migrator-operator-565b79b866-6xptq\" (UID: \"fc5fc2fd-0910-4e85-a543-060ec0dff17a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730855 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-dgmvt\" (UID: \"39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730876 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9-serving-cert\") pod \"authentication-operator-7f5c659b84-gj79t\" (UID: \"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730892 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2crzr\" (UniqueName: \"kubernetes.io/projected/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-kube-api-access-2crzr\") pod \"collect-profiles-29421495-49k9q\" (UID: \"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730910 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-grdw9\" (UniqueName: \"kubernetes.io/projected/8699f161-ee9b-4da7-9074-6af031d37b61-kube-api-access-grdw9\") pod \"multus-admission-controller-69db94689b-nqzwb\" (UID: \"8699f161-ee9b-4da7-9074-6af031d37b61\") " pod="openshift-multus/multus-admission-controller-69db94689b-nqzwb" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.730985 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/10858c0e-9efb-4468-bfec-3ef3aa1a6579-signing-cabundle\") pod \"service-ca-74545575db-d772b\" (UID: \"10858c0e-9efb-4468-bfec-3ef3aa1a6579\") " pod="openshift-service-ca/service-ca-74545575db-d772b" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.731013 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n86ff\" (UniqueName: \"kubernetes.io/projected/ee06afad-5b97-43f1-b61b-cd275363814c-kube-api-access-n86ff\") pod \"service-ca-operator-5b9c976747-dqds9\" (UID: \"ee06afad-5b97-43f1-b61b-cd275363814c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.731038 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b0adc6af-a11b-4bad-83bc-e1be5945c05f-audit\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.731053 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b0adc6af-a11b-4bad-83bc-e1be5945c05f-audit-dir\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.731073 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62cc8950-967d-4052-a42a-8b4223a1f9ab-srv-cert\") pod \"catalog-operator-75ff9f647d-k254x\" (UID: \"62cc8950-967d-4052-a42a-8b4223a1f9ab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.731089 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-jd2l4\" (UID: \"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.731106 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3316637-1d62-4c98-a599-437d5d706de7-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-dnlz8\" (UID: \"f3316637-1d62-4c98-a599-437d5d706de7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.731124 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b0adc6af-a11b-4bad-83bc-e1be5945c05f-etcd-client\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.731157 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdd55bc4-7b67-4fca-9873-1198fb68274b-serving-cert\") pod \"console-operator-67c89758df-zssql\" (UID: \"fdd55bc4-7b67-4fca-9873-1198fb68274b\") " pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.731187 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-config\") pod \"kube-controller-manager-operator-69d5f845f8-jd2l4\" (UID: \"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.731205 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42fe2705-f9ee-4e26-8e56-2730ba8f6196-tmp\") pod \"marketplace-operator-547dbd544d-k9645\" (UID: \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.731243 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g8rgq\" (UniqueName: \"kubernetes.io/projected/99f99897-fb5a-4ed4-8687-68b4a8ac5c2b-kube-api-access-g8rgq\") pod \"machine-config-controller-f9cdd68f7-dfvj6\" (UID: \"99f99897-fb5a-4ed4-8687-68b4a8ac5c2b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.731272 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a24e13f1-113b-4044-842b-a13d6d620655-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-zttss\" (UID: \"a24e13f1-113b-4044-842b-a13d6d620655\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zttss" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.731307 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1e0fc80f-04b1-4b8a-9c52-c615211955b0-profile-collector-cert\") pod \"olm-operator-5cdf44d969-xphvn\" (UID: \"1e0fc80f-04b1-4b8a-9c52-c615211955b0\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.732915 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f3316637-1d62-4c98-a599-437d5d706de7-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-dnlz8\" (UID: \"f3316637-1d62-4c98-a599-437d5d706de7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.733071 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-jd2l4\" (UID: \"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.733368 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd55bc4-7b67-4fca-9873-1198fb68274b-config\") pod \"console-operator-67c89758df-zssql\" (UID: \"fdd55bc4-7b67-4fca-9873-1198fb68274b\") " pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.733438 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b0adc6af-a11b-4bad-83bc-e1be5945c05f-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.733918 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b0adc6af-a11b-4bad-83bc-e1be5945c05f-node-pullsecrets\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.734118 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b0adc6af-a11b-4bad-83bc-e1be5945c05f-image-import-ca\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.734917 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b42f0915-c83b-4abd-a4ba-144cd754c9a6-tmpfs\") pod \"packageserver-7d4fc7d867-mglqx\" (UID: \"b42f0915-c83b-4abd-a4ba-144cd754c9a6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.735048 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42fe2705-f9ee-4e26-8e56-2730ba8f6196-tmp\") pod \"marketplace-operator-547dbd544d-k9645\" (UID: \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.735784 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b0adc6af-a11b-4bad-83bc-e1be5945c05f-audit-dir\") pod \"apiserver-9ddfb9f55-5rkz7\" 
(UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.736090 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b0adc6af-a11b-4bad-83bc-e1be5945c05f-audit\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.736149 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99f99897-fb5a-4ed4-8687-68b4a8ac5c2b-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-dfvj6\" (UID: \"99f99897-fb5a-4ed4-8687-68b4a8ac5c2b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.736846 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0adc6af-a11b-4bad-83bc-e1be5945c05f-config\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.737016 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdd55bc4-7b67-4fca-9873-1198fb68274b-trusted-ca\") pod \"console-operator-67c89758df-zssql\" (UID: \"fdd55bc4-7b67-4fca-9873-1198fb68274b\") " pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.737075 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0adc6af-a11b-4bad-83bc-e1be5945c05f-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.737562 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/62cc8950-967d-4052-a42a-8b4223a1f9ab-tmpfs\") pod \"catalog-operator-75ff9f647d-k254x\" (UID: \"62cc8950-967d-4052-a42a-8b4223a1f9ab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.739192 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b0adc6af-a11b-4bad-83bc-e1be5945c05f-etcd-client\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.739981 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0adc6af-a11b-4bad-83bc-e1be5945c05f-serving-cert\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.740437 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b0adc6af-a11b-4bad-83bc-e1be5945c05f-encryption-config\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: 
\"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.740777 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdd55bc4-7b67-4fca-9873-1198fb68274b-serving-cert\") pod \"console-operator-67c89758df-zssql\" (UID: \"fdd55bc4-7b67-4fca-9873-1198fb68274b\") " pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.741460 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a24e13f1-113b-4044-842b-a13d6d620655-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-zttss\" (UID: \"a24e13f1-113b-4044-842b-a13d6d620655\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zttss" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.745132 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.748349 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.748631 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.748815 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.749227 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.758475 5116 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.766246 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.786130 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.804205 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.816177 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5fc2fd-0910-4e85-a543-060ec0dff17a-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6xptq\" (UID: \"fc5fc2fd-0910-4e85-a543-060ec0dff17a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.825345 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.833138 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1e0fc80f-04b1-4b8a-9c52-c615211955b0-tmpfs\") pod \"olm-operator-5cdf44d969-xphvn\" (UID: \"1e0fc80f-04b1-4b8a-9c52-c615211955b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.833232 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1e0fc80f-04b1-4b8a-9c52-c615211955b0-profile-collector-cert\") pod \"olm-operator-5cdf44d969-xphvn\" (UID: \"1e0fc80f-04b1-4b8a-9c52-c615211955b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.833331 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dfc8k\" (UniqueName: \"kubernetes.io/projected/1e0fc80f-04b1-4b8a-9c52-c615211955b0-kube-api-access-dfc8k\") pod \"olm-operator-5cdf44d969-xphvn\" (UID: \"1e0fc80f-04b1-4b8a-9c52-c615211955b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.833370 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1e0fc80f-04b1-4b8a-9c52-c615211955b0-srv-cert\") pod \"olm-operator-5cdf44d969-xphvn\" (UID: \"1e0fc80f-04b1-4b8a-9c52-c615211955b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.834431 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1e0fc80f-04b1-4b8a-9c52-c615211955b0-tmpfs\") pod 
\"olm-operator-5cdf44d969-xphvn\" (UID: \"1e0fc80f-04b1-4b8a-9c52-c615211955b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.844847 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.847699 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc5fc2fd-0910-4e85-a543-060ec0dff17a-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6xptq\" (UID: \"fc5fc2fd-0910-4e85-a543-060ec0dff17a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.865427 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.885773 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.904591 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.917847 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/99f99897-fb5a-4ed4-8687-68b4a8ac5c2b-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-dfvj6\" (UID: \"99f99897-fb5a-4ed4-8687-68b4a8ac5c2b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.924341 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.944161 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.964278 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5116]: I1209 14:16:13.984680 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.004238 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.024301 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.044011 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.049410 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b42f0915-c83b-4abd-a4ba-144cd754c9a6-apiservice-cert\") pod \"packageserver-7d4fc7d867-mglqx\" (UID: \"b42f0915-c83b-4abd-a4ba-144cd754c9a6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.057889 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b42f0915-c83b-4abd-a4ba-144cd754c9a6-webhook-cert\") pod \"packageserver-7d4fc7d867-mglqx\" (UID: \"b42f0915-c83b-4abd-a4ba-144cd754c9a6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.063977 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.071195 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8699f161-ee9b-4da7-9074-6af031d37b61-webhook-certs\") pod \"multus-admission-controller-69db94689b-nqzwb\" (UID: \"8699f161-ee9b-4da7-9074-6af031d37b61\") " pod="openshift-multus/multus-admission-controller-69db94689b-nqzwb" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.084812 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.105176 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.114420 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-gj79t\" (UID: \"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.127580 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.145255 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.165171 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.177925 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9-serving-cert\") pod \"authentication-operator-7f5c659b84-gj79t\" (UID: \"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.184808 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.211330 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.213024 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-gj79t\" (UID: \"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.225182 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.236031 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9-config\") pod \"authentication-operator-7f5c659b84-gj79t\" (UID: \"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.245372 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.250471 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/10858c0e-9efb-4468-bfec-3ef3aa1a6579-signing-key\") pod \"service-ca-74545575db-d772b\" (UID: \"10858c0e-9efb-4468-bfec-3ef3aa1a6579\") " pod="openshift-service-ca/service-ca-74545575db-d772b" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.265108 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.284826 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.304336 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.315747 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/10858c0e-9efb-4468-bfec-3ef3aa1a6579-signing-cabundle\") pod \"service-ca-74545575db-d772b\" (UID: \"10858c0e-9efb-4468-bfec-3ef3aa1a6579\") " pod="openshift-service-ca/service-ca-74545575db-d772b" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.325784 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.345083 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.357613 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42fe2705-f9ee-4e26-8e56-2730ba8f6196-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-k9645\" (UID: \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:14 crc 
kubenswrapper[5116]: I1209 14:16:14.376768 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.385311 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42fe2705-f9ee-4e26-8e56-2730ba8f6196-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-k9645\" (UID: \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.385397 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.404764 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.424024 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.437041 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3316637-1d62-4c98-a599-437d5d706de7-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-dnlz8\" (UID: \"f3316637-1d62-4c98-a599-437d5d706de7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.444378 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.447190 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3316637-1d62-4c98-a599-437d5d706de7-config\") pod \"openshift-kube-scheduler-operator-54f497555d-dnlz8\" (UID: \"f3316637-1d62-4c98-a599-437d5d706de7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.465113 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.484729 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.504761 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.524858 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.543154 5116 request.go:752] "Waited before sending request" delay="1.000865693s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0" Dec 
09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.545013 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.557598 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee06afad-5b97-43f1-b61b-cd275363814c-serving-cert\") pod \"service-ca-operator-5b9c976747-dqds9\" (UID: \"ee06afad-5b97-43f1-b61b-cd275363814c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.564367 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.567114 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee06afad-5b97-43f1-b61b-cd275363814c-config\") pod \"service-ca-operator-5b9c976747-dqds9\" (UID: \"ee06afad-5b97-43f1-b61b-cd275363814c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.584613 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.604903 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.624907 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.629039 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-secret-volume\") pod \"collect-profiles-29421495-49k9q\" (UID: \"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.635361 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62cc8950-967d-4052-a42a-8b4223a1f9ab-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-k254x\" (UID: \"62cc8950-967d-4052-a42a-8b4223a1f9ab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.637740 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1e0fc80f-04b1-4b8a-9c52-c615211955b0-profile-collector-cert\") pod \"olm-operator-5cdf44d969-xphvn\" (UID: \"1e0fc80f-04b1-4b8a-9c52-c615211955b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.645802 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.650879 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62cc8950-967d-4052-a42a-8b4223a1f9ab-srv-cert\") pod 
\"catalog-operator-75ff9f647d-k254x\" (UID: \"62cc8950-967d-4052-a42a-8b4223a1f9ab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.664095 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.684799 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.704609 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.713308 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-dgmvt\" (UID: \"39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.732265 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.732357 5116 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.732465 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-config podName:e1331c14-9b1f-4b01-85fa-e2a4fafd3da6 nodeName:}" failed. No retries permitted until 2025-12-09 14:16:15.232438348 +0000 UTC m=+113.754183146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-config") pod "kube-controller-manager-operator-69d5f845f8-jd2l4" (UID: "e1331c14-9b1f-4b01-85fa-e2a4fafd3da6") : failed to sync configmap cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.732753 5116 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.732805 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16b3601d-79fa-43bf-ac32-ff4af19c5a3f-package-server-manager-serving-cert podName:16b3601d-79fa-43bf-ac32-ff4af19c5a3f nodeName:}" failed. No retries permitted until 2025-12-09 14:16:15.232793358 +0000 UTC m=+113.754538156 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/16b3601d-79fa-43bf-ac32-ff4af19c5a3f-package-server-manager-serving-cert") pod "package-server-manager-77f986bd66-2r96f" (UID: "16b3601d-79fa-43bf-ac32-ff4af19c5a3f") : failed to sync secret cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.732810 5116 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.733007 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-metrics-certs podName:327bd87a-2375-4b04-b49f-173966bba4fc nodeName:}" failed. No retries permitted until 2025-12-09 14:16:15.232937652 +0000 UTC m=+113.754682460 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-metrics-certs") pod "router-default-68cf44c8b8-xd87d" (UID: "327bd87a-2375-4b04-b49f-173966bba4fc") : failed to sync secret cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.733271 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-dgmvt\" (UID: \"39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.733345 5116 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.733392 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/327bd87a-2375-4b04-b49f-173966bba4fc-service-ca-bundle podName:327bd87a-2375-4b04-b49f-173966bba4fc nodeName:}" failed. No retries permitted until 2025-12-09 14:16:15.233378793 +0000 UTC m=+113.755123591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/327bd87a-2375-4b04-b49f-173966bba4fc-service-ca-bundle") pod "router-default-68cf44c8b8-xd87d" (UID: "327bd87a-2375-4b04-b49f-173966bba4fc") : failed to sync configmap cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.735042 5116 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.735111 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-config-volume podName:03bd8401-ab8b-4d8c-a1a8-d9341a7becf9 nodeName:}" failed. No retries permitted until 2025-12-09 14:16:15.235096939 +0000 UTC m=+113.756841747 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-config-volume") pod "collect-profiles-29421495-49k9q" (UID: "03bd8401-ab8b-4d8c-a1a8-d9341a7becf9") : failed to sync configmap cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.736198 5116 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.736282 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-default-certificate podName:327bd87a-2375-4b04-b49f-173966bba4fc nodeName:}" failed. No retries permitted until 2025-12-09 14:16:15.23626286 +0000 UTC m=+113.758007698 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-default-certificate") pod "router-default-68cf44c8b8-xd87d" (UID: "327bd87a-2375-4b04-b49f-173966bba4fc") : failed to sync secret cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.737286 5116 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.737340 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-serving-cert podName:e1331c14-9b1f-4b01-85fa-e2a4fafd3da6 nodeName:}" failed. No retries permitted until 2025-12-09 14:16:15.237328268 +0000 UTC m=+113.759073056 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-serving-cert") pod "kube-controller-manager-operator-69d5f845f8-jd2l4" (UID: "e1331c14-9b1f-4b01-85fa-e2a4fafd3da6") : failed to sync secret cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.737354 5116 secret.go:189] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.737409 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-stats-auth podName:327bd87a-2375-4b04-b49f-173966bba4fc nodeName:}" failed. No retries permitted until 2025-12-09 14:16:15.23739446 +0000 UTC m=+113.759139298 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-stats-auth") pod "router-default-68cf44c8b8-xd87d" (UID: "327bd87a-2375-4b04-b49f-173966bba4fc") : failed to sync secret cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.744074 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.764018 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.784920 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.805061 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.825876 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.834384 5116 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: E1209 14:16:14.834483 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e0fc80f-04b1-4b8a-9c52-c615211955b0-srv-cert podName:1e0fc80f-04b1-4b8a-9c52-c615211955b0 nodeName:}" failed. No retries permitted until 2025-12-09 14:16:15.334454593 +0000 UTC m=+113.856199421 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/1e0fc80f-04b1-4b8a-9c52-c615211955b0-srv-cert") pod "olm-operator-5cdf44d969-xphvn" (UID: "1e0fc80f-04b1-4b8a-9c52-c615211955b0") : failed to sync secret cache: timed out waiting for the condition Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.844910 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.865094 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.884594 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.904614 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.925022 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.945707 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.966168 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 09 14:16:14 crc kubenswrapper[5116]: I1209 14:16:14.985787 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.006348 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.026541 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.044693 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.065665 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.085631 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.104457 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.124537 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.145136 5116 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.164388 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.185429 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.205423 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.225999 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.245109 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.259920 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-default-certificate\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.260096 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-config\") pod \"kube-controller-manager-operator-69d5f845f8-jd2l4\" (UID: \"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.260751 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-config\") pod \"kube-controller-manager-operator-69d5f845f8-jd2l4\" (UID: \"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.261446 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-config-volume\") pod \"collect-profiles-29421495-49k9q\" (UID: \"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.261492 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-config-volume\") pod \"collect-profiles-29421495-49k9q\" (UID: \"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.261580 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-stats-auth\") pod \"router-default-68cf44c8b8-xd87d\" (UID: 
\"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.261611 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-jd2l4\" (UID: \"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.262139 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-metrics-certs\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.262169 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/16b3601d-79fa-43bf-ac32-ff4af19c5a3f-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-2r96f\" (UID: \"16b3601d-79fa-43bf-ac32-ff4af19c5a3f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.262209 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/327bd87a-2375-4b04-b49f-173966bba4fc-service-ca-bundle\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.264424 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/327bd87a-2375-4b04-b49f-173966bba4fc-service-ca-bundle\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.265622 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-jd2l4\" (UID: \"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.265617 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-default-certificate\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.266189 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-stats-auth\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.266569 5116 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.273834 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/16b3601d-79fa-43bf-ac32-ff4af19c5a3f-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-2r96f\" (UID: \"16b3601d-79fa-43bf-ac32-ff4af19c5a3f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.274395 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/327bd87a-2375-4b04-b49f-173966bba4fc-metrics-certs\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.285612 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.305584 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.324811 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.344494 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.364606 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.364661 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1e0fc80f-04b1-4b8a-9c52-c615211955b0-srv-cert\") pod \"olm-operator-5cdf44d969-xphvn\" (UID: \"1e0fc80f-04b1-4b8a-9c52-c615211955b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.370102 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1e0fc80f-04b1-4b8a-9c52-c615211955b0-srv-cert\") pod \"olm-operator-5cdf44d969-xphvn\" (UID: \"1e0fc80f-04b1-4b8a-9c52-c615211955b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.384664 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.404332 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.424196 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.446153 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" 
Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.464496 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.484914 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.504854 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.524470 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.543312 5116 request.go:752] "Waited before sending request" delay="1.93280774s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.574551 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n69t9\" (UniqueName: \"kubernetes.io/projected/89cfa976-43a8-469f-ba75-7a630ae3e072-kube-api-access-n69t9\") pod \"openshift-apiserver-operator-846cbfc458-9qk2k\" (UID: \"89cfa976-43a8-469f-ba75-7a630ae3e072\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.592831 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-spfpb\" (UniqueName: \"kubernetes.io/projected/908467fd-ce00-441d-a504-dce785c290f2-kube-api-access-spfpb\") pod \"oauth-openshift-66458b6674-8k9f4\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.604459 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9bbm\" (UniqueName: \"kubernetes.io/projected/22f707dd-e3f1-40e4-bc80-72d0d0ccf8ad-kube-api-access-w9bbm\") pod \"downloads-747b44746d-tx992\" (UID: \"22f707dd-e3f1-40e4-bc80-72d0d0ccf8ad\") " pod="openshift-console/downloads-747b44746d-tx992" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.627003 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-58t5p\" (UniqueName: \"kubernetes.io/projected/5e9a5cea-a4d8-4f7e-8ad4-708b692c7372-kube-api-access-58t5p\") pod \"openshift-config-operator-5777786469-x4svw\" (UID: \"5e9a5cea-a4d8-4f7e-8ad4-708b692c7372\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.639608 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.657703 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-97jqs\" (UniqueName: \"kubernetes.io/projected/f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9-kube-api-access-97jqs\") pod \"machine-approver-54c688565-kdm79\" (UID: \"f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.658116 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.673833 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmzg7\" (UniqueName: \"kubernetes.io/projected/89c88c0b-1e12-405a-96ef-49bab04d20f5-kube-api-access-vmzg7\") pod \"machine-api-operator-755bb95488-52tsw\" (UID: \"89c88c0b-1e12-405a-96ef-49bab04d20f5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.689225 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.694243 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dhh8\" (UniqueName: \"kubernetes.io/projected/bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae-kube-api-access-6dhh8\") pod \"apiserver-8596bd845d-mz9vv\" (UID: \"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.704977 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.713157 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m59ct\" (UniqueName: \"kubernetes.io/projected/f3706088-2315-4b35-852b-1327e8a99d18-kube-api-access-m59ct\") pod \"console-64d44f6ddf-8sdgn\" (UID: \"f3706088-2315-4b35-852b-1327e8a99d18\") " pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:15 crc kubenswrapper[5116]: W1209 14:16:15.715611 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4a319d1_e0a4_4b2d_a40a_b5761f1b8cf9.slice/crio-9326e76578c1ab9b08dc935d2aa612b70b469ae3b899dcc48f0a9ef35f1f9bb7 WatchSource:0}: Error finding container 9326e76578c1ab9b08dc935d2aa612b70b469ae3b899dcc48f0a9ef35f1f9bb7: Status 404 returned error can't find the container with id 9326e76578c1ab9b08dc935d2aa612b70b469ae3b899dcc48f0a9ef35f1f9bb7 Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.724056 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xppl\" (UniqueName: \"kubernetes.io/projected/e396c726-e045-447e-9420-93f09255e695-kube-api-access-7xppl\") pod \"cluster-samples-operator-6b564684c8-km7vh\" (UID: \"e396c726-e045-447e-9420-93f09255e695\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-km7vh" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.750888 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p25d2\" (UniqueName: \"kubernetes.io/projected/4a6214c5-1554-43a3-82d3-65532d7a79a4-kube-api-access-p25d2\") pod \"controller-manager-65b6cccf98-g2vff\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.773376 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d229a45-586a-4cf8-9e25-fd80224017fb-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-pw8wg\" (UID: \"8d229a45-586a-4cf8-9e25-fd80224017fb\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.783466 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmkvh\" (UniqueName: \"kubernetes.io/projected/627f3889-5fe0-4a44-9def-9363af7a5979-kube-api-access-cmkvh\") pod \"openshift-controller-manager-operator-686468bdd5-kmf69\" (UID: \"627f3889-5fe0-4a44-9def-9363af7a5979\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.786390 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.800549 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-br69p\" (UniqueName: \"kubernetes.io/projected/f436eb65-e5b7-4b61-9072-699f6c071102-kube-api-access-br69p\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.812936 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-tx992" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.820330 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpgfv\" (UniqueName: \"kubernetes.io/projected/9ae31159-3efc-4516-830c-cabd140b3a6b-kube-api-access-dpgfv\") pod \"dns-operator-799b87ffcd-42qdh\" (UID: \"9ae31159-3efc-4516-830c-cabd140b3a6b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-42qdh" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.839503 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f436eb65-e5b7-4b61-9072-699f6c071102-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-gj2tv\" (UID: \"f436eb65-e5b7-4b61-9072-699f6c071102\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.860639 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6ptb\" (UniqueName: \"kubernetes.io/projected/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-kube-api-access-w6ptb\") pod \"route-controller-manager-776cdc94d6-r7lsl\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.883239 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rzx5\" (UniqueName: \"kubernetes.io/projected/1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb-kube-api-access-5rzx5\") pod \"etcd-operator-69b85846b6-wrw7v\" (UID: \"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.885808 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-x4svw"] Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.893162 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.902985 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.912593 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-42qdh" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.920833 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.922562 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k"] Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.934110 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.947225 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww4sg\" (UniqueName: \"kubernetes.io/projected/16b3601d-79fa-43bf-ac32-ff4af19c5a3f-kube-api-access-ww4sg\") pod \"package-server-manager-77f986bd66-2r96f\" (UID: \"16b3601d-79fa-43bf-ac32-ff4af19c5a3f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.948985 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-km7vh" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.955366 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-8k9f4"] Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.964744 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3316637-1d62-4c98-a599-437d5d706de7-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-dnlz8\" (UID: \"f3316637-1d62-4c98-a599-437d5d706de7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.969340 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.975370 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.989029 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-drrpj\" (UniqueName: \"kubernetes.io/projected/39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada-kube-api-access-drrpj\") pod \"ingress-operator-6b9cb4dbcf-dgmvt\" (UID: \"39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" Dec 09 14:16:15 crc kubenswrapper[5116]: I1209 14:16:15.997589 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.003579 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxnn2\" (UniqueName: \"kubernetes.io/projected/10858c0e-9efb-4468-bfec-3ef3aa1a6579-kube-api-access-kxnn2\") pod \"service-ca-74545575db-d772b\" (UID: \"10858c0e-9efb-4468-bfec-3ef3aa1a6579\") " pod="openshift-service-ca/service-ca-74545575db-d772b" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.010176 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-g2vff"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.017027 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.027350 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x7bl\" (UniqueName: \"kubernetes.io/projected/fdd55bc4-7b67-4fca-9873-1198fb68274b-kube-api-access-6x7bl\") pod \"console-operator-67c89758df-zssql\" (UID: \"fdd55bc4-7b67-4fca-9873-1198fb68274b\") " pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.049456 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq9lg\" (UniqueName: \"kubernetes.io/projected/b0adc6af-a11b-4bad-83bc-e1be5945c05f-kube-api-access-wq9lg\") pod \"apiserver-9ddfb9f55-5rkz7\" (UID: \"b0adc6af-a11b-4bad-83bc-e1be5945c05f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.068784 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj78b\" (UniqueName: \"kubernetes.io/projected/1c05eb02-555a-4906-85e9-3a0eac0cdbc2-kube-api-access-hj78b\") pod \"migrator-866fcbc849-6htrq\" (UID: \"1c05eb02-555a-4906-85e9-3a0eac0cdbc2\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6htrq" Dec 09 14:16:16 crc kubenswrapper[5116]: W1209 14:16:16.072514 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod908467fd_ce00_441d_a504_dce785c290f2.slice/crio-1ec72e7dee55654f81943618759c94598e6c986035cc0328e68ad7855677a409 WatchSource:0}: Error finding container 1ec72e7dee55654f81943618759c94598e6c986035cc0328e68ad7855677a409: Status 404 returned error can't find the container with id 1ec72e7dee55654f81943618759c94598e6c986035cc0328e68ad7855677a409 Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.072657 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-tx992"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.078129 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.081525 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5kwz\" (UniqueName: \"kubernetes.io/projected/b42f0915-c83b-4abd-a4ba-144cd754c9a6-kube-api-access-d5kwz\") pod \"packageserver-7d4fc7d867-mglqx\" (UID: \"b42f0915-c83b-4abd-a4ba-144cd754c9a6\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.104974 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1331c14-9b1f-4b01-85fa-e2a4fafd3da6-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-jd2l4\" (UID: \"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.109172 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6htrq" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.121375 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.126452 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-dgmvt\" (UID: \"39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" Dec 09 14:16:16 crc kubenswrapper[5116]: W1209 14:16:16.127219 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22f707dd_e3f1_40e4_bc80_72d0d0ccf8ad.slice/crio-8071ad9d9c67f9711f2eabaa14d90d4ae5088439702e63b9b46ccfdc745da273 WatchSource:0}: Error finding container 8071ad9d9c67f9711f2eabaa14d90d4ae5088439702e63b9b46ccfdc745da273: Status 404 returned error can't find the container with id 8071ad9d9c67f9711f2eabaa14d90d4ae5088439702e63b9b46ccfdc745da273 Dec 09 14:16:16 crc kubenswrapper[5116]: W1209 14:16:16.139456 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a6214c5_1554_43a3_82d3_65532d7a79a4.slice/crio-d454243546f4f1c8603d31a38c01490013f1b7a5d1fd57116bc78c63eae8ee20 WatchSource:0}: Error finding container d454243546f4f1c8603d31a38c01490013f1b7a5d1fd57116bc78c63eae8ee20: Status 404 returned error can't find the container with id d454243546f4f1c8603d31a38c01490013f1b7a5d1fd57116bc78c63eae8ee20 Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.146126 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbtcg\" (UniqueName: \"kubernetes.io/projected/42fe2705-f9ee-4e26-8e56-2730ba8f6196-kube-api-access-wbtcg\") pod \"marketplace-operator-547dbd544d-k9645\" (UID: \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.159339 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-d772b" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.167552 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzcpr\" (UniqueName: \"kubernetes.io/projected/fc5fc2fd-0910-4e85-a543-060ec0dff17a-kube-api-access-mzcpr\") pod \"kube-storage-version-migrator-operator-565b79b866-6xptq\" (UID: \"fc5fc2fd-0910-4e85-a543-060ec0dff17a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.174165 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-52tsw"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.175501 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.180457 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2crzr\" (UniqueName: \"kubernetes.io/projected/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-kube-api-access-2crzr\") pod \"collect-profiles-29421495-49k9q\" (UID: \"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.185251 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.198007 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-42qdh"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.204671 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-grdw9\" (UniqueName: \"kubernetes.io/projected/8699f161-ee9b-4da7-9074-6af031d37b61-kube-api-access-grdw9\") pod \"multus-admission-controller-69db94689b-nqzwb\" (UID: \"8699f161-ee9b-4da7-9074-6af031d37b61\") " pod="openshift-multus/multus-admission-controller-69db94689b-nqzwb" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.208530 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-8sdgn"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.218778 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.223659 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dclzk\" (UniqueName: \"kubernetes.io/projected/327bd87a-2375-4b04-b49f-173966bba4fc-kube-api-access-dclzk\") pod \"router-default-68cf44c8b8-xd87d\" (UID: \"327bd87a-2375-4b04-b49f-173966bba4fc\") " pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.228157 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.246343 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.247356 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8rgq\" (UniqueName: \"kubernetes.io/projected/99f99897-fb5a-4ed4-8687-68b4a8ac5c2b-kube-api-access-g8rgq\") pod \"machine-config-controller-f9cdd68f7-dfvj6\" (UID: \"99f99897-fb5a-4ed4-8687-68b4a8ac5c2b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.255053 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.263366 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9s4l\" (UniqueName: \"kubernetes.io/projected/7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9-kube-api-access-d9s4l\") pod \"authentication-operator-7f5c659b84-gj79t\" (UID: \"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.265436 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.284720 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:16 crc kubenswrapper[5116]: W1209 14:16:16.286890 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3706088_2315_4b35_852b_1327e8a99d18.slice/crio-6d2193b3d694a040c83214948fe8d6e282c995d7ccefe92ea227fc06e025177e WatchSource:0}: Error finding container 6d2193b3d694a040c83214948fe8d6e282c995d7ccefe92ea227fc06e025177e: Status 404 returned error can't find the container with id 6d2193b3d694a040c83214948fe8d6e282c995d7ccefe92ea227fc06e025177e Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.290401 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncxfj\" (UniqueName: \"kubernetes.io/projected/62cc8950-967d-4052-a42a-8b4223a1f9ab-kube-api-access-ncxfj\") pod \"catalog-operator-75ff9f647d-k254x\" (UID: \"62cc8950-967d-4052-a42a-8b4223a1f9ab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.302028 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n86ff\" (UniqueName: \"kubernetes.io/projected/ee06afad-5b97-43f1-b61b-cd275363814c-kube-api-access-n86ff\") pod \"service-ca-operator-5b9c976747-dqds9\" (UID: \"ee06afad-5b97-43f1-b61b-cd275363814c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.319116 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.323672 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl6fp\" (UniqueName: \"kubernetes.io/projected/a24e13f1-113b-4044-842b-a13d6d620655-kube-api-access-nl6fp\") pod \"control-plane-machine-set-operator-75ffdb6fcd-zttss\" (UID: \"a24e13f1-113b-4044-842b-a13d6d620655\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zttss" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.324872 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.357332 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.367499 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.369848 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-tx992" event={"ID":"22f707dd-e3f1-40e4-bc80-72d0d0ccf8ad","Type":"ContainerStarted","Data":"8071ad9d9c67f9711f2eabaa14d90d4ae5088439702e63b9b46ccfdc745da273"} Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.376223 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" event={"ID":"4a6214c5-1554-43a3-82d3-65532d7a79a4","Type":"ContainerStarted","Data":"d454243546f4f1c8603d31a38c01490013f1b7a5d1fd57116bc78c63eae8ee20"} Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.383073 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k" event={"ID":"89cfa976-43a8-469f-ba75-7a630ae3e072","Type":"ContainerStarted","Data":"7ede6c98db53a942ec557b21e63e46046103783db19ca2d22feac2c087f1253d"} Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.384604 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-km7vh"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.384890 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.386095 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zttss" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.390796 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-8sdgn" event={"ID":"f3706088-2315-4b35-852b-1327e8a99d18","Type":"ContainerStarted","Data":"6d2193b3d694a040c83214948fe8d6e282c995d7ccefe92ea227fc06e025177e"} Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.393296 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.393991 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" event={"ID":"f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9","Type":"ContainerStarted","Data":"e769b5001068f4eab07107e05490ac8112a803092f3f41f9317ba25985949c4e"} Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.394028 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" event={"ID":"f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9","Type":"ContainerStarted","Data":"9326e76578c1ab9b08dc935d2aa612b70b469ae3b899dcc48f0a9ef35f1f9bb7"} Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.399750 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" event={"ID":"8d229a45-586a-4cf8-9e25-fd80224017fb","Type":"ContainerStarted","Data":"db05870c1bacbd27a7f683108966278bc350f8d7bd2b7f80b46c102248a92d7d"} Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.400309 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.400997 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" event={"ID":"908467fd-ce00-441d-a504-dce785c290f2","Type":"ContainerStarted","Data":"1ec72e7dee55654f81943618759c94598e6c986035cc0328e68ad7855677a409"} Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.403726 5116 generic.go:358] "Generic (PLEG): container finished" podID="5e9a5cea-a4d8-4f7e-8ad4-708b692c7372" containerID="d30fe0d81f5cff8acd905576a2efcfe9e5200d006b4cd3ddd6d1be0fc3db5bab" exitCode=0 Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.403802 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" event={"ID":"5e9a5cea-a4d8-4f7e-8ad4-708b692c7372","Type":"ContainerDied","Data":"d30fe0d81f5cff8acd905576a2efcfe9e5200d006b4cd3ddd6d1be0fc3db5bab"} Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.403824 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" event={"ID":"5e9a5cea-a4d8-4f7e-8ad4-708b692c7372","Type":"ContainerStarted","Data":"2bb9eff6bfc982d539f2b0b7142a89ac3d920898ef387c22a13b2d3f0dca795e"} Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.403988 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.406441 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-42qdh" event={"ID":"9ae31159-3efc-4516-830c-cabd140b3a6b","Type":"ContainerStarted","Data":"0700f18be53f019d61cfab38d3c2cfb7db5f9642428deaed30968399d400cf7d"} Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.407330 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" event={"ID":"89c88c0b-1e12-405a-96ef-49bab04d20f5","Type":"ContainerStarted","Data":"b87311e9ea13ee70d48165224bbacbfb36846270e4d27fdd4a46e9d31f03b481"} Dec 09 14:16:16 crc 
kubenswrapper[5116]: I1209 14:16:16.426276 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.429488 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-nqzwb" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.429879 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.448471 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.490871 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfc8k\" (UniqueName: \"kubernetes.io/projected/1e0fc80f-04b1-4b8a-9c52-c615211955b0-kube-api-access-dfc8k\") pod \"olm-operator-5cdf44d969-xphvn\" (UID: \"1e0fc80f-04b1-4b8a-9c52-c615211955b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.491781 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.495641 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.508043 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.565059 5116 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.574202 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:16 crc kubenswrapper[5116]: W1209 14:16:16.593827 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod327bd87a_2375_4b04_b49f_173966bba4fc.slice/crio-dbdc555562146e414379ab1e5a1b3f94e988ebcac24b34a3483a799f0d4853c2 WatchSource:0}: Error finding container dbdc555562146e414379ab1e5a1b3f94e988ebcac24b34a3483a799f0d4853c2: Status 404 returned error can't find the container with id dbdc555562146e414379ab1e5a1b3f94e988ebcac24b34a3483a799f0d4853c2 Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.593832 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-bound-sa-token\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.593891 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a423b571-b9cd-4400-b95c-4d2f6073413e-kube-api-access\") pod \"kube-apiserver-operator-575994946d-sqftg\" (UID: \"a423b571-b9cd-4400-b95c-4d2f6073413e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.595083 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwc2c\" (UniqueName: \"kubernetes.io/projected/8452c171-2f05-4155-8af3-03424f469d98-kube-api-access-dwc2c\") pod \"machine-config-server-6wc7f\" (UID: \"8452c171-2f05-4155-8af3-03424f469d98\") " pod="openshift-machine-config-operator/machine-config-server-6wc7f" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.596449 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/313d594f-05f3-4875-8cd4-1bab1042ba29-socket-dir\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.596539 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27abbd50-923e-4150-8073-95a6df5ff47e-config-volume\") pod \"dns-default-fbkc6\" (UID: \"27abbd50-923e-4150-8073-95a6df5ff47e\") " pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.597110 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/313d594f-05f3-4875-8cd4-1bab1042ba29-registration-dir\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.597168 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-ca-trust-extracted\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " 
pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.597388 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-registry-certificates\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.597570 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/313d594f-05f3-4875-8cd4-1bab1042ba29-mountpoint-dir\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.597774 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4825a732-6888-45c1-845a-07b125e37de7-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-wq89m\" (UID: \"4825a732-6888-45c1-845a-07b125e37de7\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.598091 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/313d594f-05f3-4875-8cd4-1bab1042ba29-plugins-dir\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.598172 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8452c171-2f05-4155-8af3-03424f469d98-certs\") pod \"machine-config-server-6wc7f\" (UID: \"8452c171-2f05-4155-8af3-03424f469d98\") " pod="openshift-machine-config-operator/machine-config-server-6wc7f" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.598244 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7szst\" (UniqueName: \"kubernetes.io/projected/27abbd50-923e-4150-8073-95a6df5ff47e-kube-api-access-7szst\") pod \"dns-default-fbkc6\" (UID: \"27abbd50-923e-4150-8073-95a6df5ff47e\") " pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.598321 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spqdc\" (UniqueName: \"kubernetes.io/projected/313d594f-05f3-4875-8cd4-1bab1042ba29-kube-api-access-spqdc\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.598350 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/27abbd50-923e-4150-8073-95a6df5ff47e-tmp-dir\") pod \"dns-default-fbkc6\" (UID: \"27abbd50-923e-4150-8073-95a6df5ff47e\") " pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.598417 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-fc8gh\" (UniqueName: \"kubernetes.io/projected/5aefd7ee-e68e-4038-a035-dd6b44194e2d-kube-api-access-fc8gh\") pod \"ingress-canary-k8jnm\" (UID: \"5aefd7ee-e68e-4038-a035-dd6b44194e2d\") " pod="openshift-ingress-canary/ingress-canary-k8jnm" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.598440 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/313d594f-05f3-4875-8cd4-1bab1042ba29-csi-data-dir\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.598604 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-trusted-ca\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.598688 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9ctzk\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.598774 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a423b571-b9cd-4400-b95c-4d2f6073413e-tmp-dir\") pod \"kube-apiserver-operator-575994946d-sqftg\" (UID: \"a423b571-b9cd-4400-b95c-4d2f6073413e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.598844 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-ready\") pod \"cni-sysctl-allowlist-ds-9ctzk\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.598913 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-installation-pull-secrets\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.599077 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a423b571-b9cd-4400-b95c-4d2f6073413e-serving-cert\") pod \"kube-apiserver-operator-575994946d-sqftg\" (UID: \"a423b571-b9cd-4400-b95c-4d2f6073413e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.599230 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5aefd7ee-e68e-4038-a035-dd6b44194e2d-cert\") pod \"ingress-canary-k8jnm\" (UID: 
\"5aefd7ee-e68e-4038-a035-dd6b44194e2d\") " pod="openshift-ingress-canary/ingress-canary-k8jnm" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.599714 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/27abbd50-923e-4150-8073-95a6df5ff47e-metrics-tls\") pod \"dns-default-fbkc6\" (UID: \"27abbd50-923e-4150-8073-95a6df5ff47e\") " pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.599937 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.600018 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4825a732-6888-45c1-845a-07b125e37de7-images\") pod \"machine-config-operator-67c9d58cbb-wq89m\" (UID: \"4825a732-6888-45c1-845a-07b125e37de7\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.600287 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8452c171-2f05-4155-8af3-03424f469d98-node-bootstrap-token\") pod \"machine-config-server-6wc7f\" (UID: \"8452c171-2f05-4155-8af3-03424f469d98\") " pod="openshift-machine-config-operator/machine-config-server-6wc7f" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.600340 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x99pj\" (UniqueName: \"kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-kube-api-access-x99pj\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.600369 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b7bn\" (UniqueName: \"kubernetes.io/projected/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-kube-api-access-2b7bn\") pod \"cni-sysctl-allowlist-ds-9ctzk\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.600549 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9ctzk\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.600619 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4825a732-6888-45c1-845a-07b125e37de7-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-wq89m\" (UID: \"4825a732-6888-45c1-845a-07b125e37de7\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" Dec 09 
14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.600661 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a423b571-b9cd-4400-b95c-4d2f6073413e-config\") pod \"kube-apiserver-operator-575994946d-sqftg\" (UID: \"a423b571-b9cd-4400-b95c-4d2f6073413e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.600788 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-registry-tls\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.600860 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcx2g\" (UniqueName: \"kubernetes.io/projected/4825a732-6888-45c1-845a-07b125e37de7-kube-api-access-mcx2g\") pod \"machine-config-operator-67c9d58cbb-wq89m\" (UID: \"4825a732-6888-45c1-845a-07b125e37de7\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" Dec 09 14:16:16 crc kubenswrapper[5116]: E1209 14:16:16.602496 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:17.102482038 +0000 UTC m=+115.624226826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.633473 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-d772b"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.705530 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.705800 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-trusted-ca\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.705829 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9ctzk\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:16 crc 
kubenswrapper[5116]: I1209 14:16:16.705853 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a423b571-b9cd-4400-b95c-4d2f6073413e-tmp-dir\") pod \"kube-apiserver-operator-575994946d-sqftg\" (UID: \"a423b571-b9cd-4400-b95c-4d2f6073413e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.705876 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-ready\") pod \"cni-sysctl-allowlist-ds-9ctzk\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.705894 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-installation-pull-secrets\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.705921 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a423b571-b9cd-4400-b95c-4d2f6073413e-serving-cert\") pod \"kube-apiserver-operator-575994946d-sqftg\" (UID: \"a423b571-b9cd-4400-b95c-4d2f6073413e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.705943 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5aefd7ee-e68e-4038-a035-dd6b44194e2d-cert\") pod \"ingress-canary-k8jnm\" (UID: \"5aefd7ee-e68e-4038-a035-dd6b44194e2d\") " pod="openshift-ingress-canary/ingress-canary-k8jnm" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706001 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/27abbd50-923e-4150-8073-95a6df5ff47e-metrics-tls\") pod \"dns-default-fbkc6\" (UID: \"27abbd50-923e-4150-8073-95a6df5ff47e\") " pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:16 crc kubenswrapper[5116]: E1209 14:16:16.706060 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:17.206013513 +0000 UTC m=+115.727758321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706110 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4825a732-6888-45c1-845a-07b125e37de7-images\") pod \"machine-config-operator-67c9d58cbb-wq89m\" (UID: \"4825a732-6888-45c1-845a-07b125e37de7\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706215 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8452c171-2f05-4155-8af3-03424f469d98-node-bootstrap-token\") pod \"machine-config-server-6wc7f\" (UID: \"8452c171-2f05-4155-8af3-03424f469d98\") " pod="openshift-machine-config-operator/machine-config-server-6wc7f" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706261 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x99pj\" (UniqueName: \"kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-kube-api-access-x99pj\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706300 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2b7bn\" (UniqueName: \"kubernetes.io/projected/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-kube-api-access-2b7bn\") pod \"cni-sysctl-allowlist-ds-9ctzk\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706378 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9ctzk\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706426 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4825a732-6888-45c1-845a-07b125e37de7-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-wq89m\" (UID: \"4825a732-6888-45c1-845a-07b125e37de7\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706536 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a423b571-b9cd-4400-b95c-4d2f6073413e-config\") pod \"kube-apiserver-operator-575994946d-sqftg\" (UID: \"a423b571-b9cd-4400-b95c-4d2f6073413e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706595 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-registry-tls\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706635 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mcx2g\" (UniqueName: \"kubernetes.io/projected/4825a732-6888-45c1-845a-07b125e37de7-kube-api-access-mcx2g\") pod \"machine-config-operator-67c9d58cbb-wq89m\" (UID: \"4825a732-6888-45c1-845a-07b125e37de7\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706724 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-bound-sa-token\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706748 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a423b571-b9cd-4400-b95c-4d2f6073413e-kube-api-access\") pod \"kube-apiserver-operator-575994946d-sqftg\" (UID: \"a423b571-b9cd-4400-b95c-4d2f6073413e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706808 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dwc2c\" (UniqueName: \"kubernetes.io/projected/8452c171-2f05-4155-8af3-03424f469d98-kube-api-access-dwc2c\") pod \"machine-config-server-6wc7f\" (UID: \"8452c171-2f05-4155-8af3-03424f469d98\") " pod="openshift-machine-config-operator/machine-config-server-6wc7f" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706870 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/313d594f-05f3-4875-8cd4-1bab1042ba29-socket-dir\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706912 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27abbd50-923e-4150-8073-95a6df5ff47e-config-volume\") pod \"dns-default-fbkc6\" (UID: \"27abbd50-923e-4150-8073-95a6df5ff47e\") " pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.706973 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/313d594f-05f3-4875-8cd4-1bab1042ba29-registration-dir\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.707007 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-ca-trust-extracted\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc 
kubenswrapper[5116]: I1209 14:16:16.707050 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-registry-certificates\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.707079 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/313d594f-05f3-4875-8cd4-1bab1042ba29-mountpoint-dir\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.707102 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4825a732-6888-45c1-845a-07b125e37de7-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-wq89m\" (UID: \"4825a732-6888-45c1-845a-07b125e37de7\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.707162 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/313d594f-05f3-4875-8cd4-1bab1042ba29-plugins-dir\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.707197 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8452c171-2f05-4155-8af3-03424f469d98-certs\") pod \"machine-config-server-6wc7f\" (UID: \"8452c171-2f05-4155-8af3-03424f469d98\") " pod="openshift-machine-config-operator/machine-config-server-6wc7f" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.707231 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7szst\" (UniqueName: \"kubernetes.io/projected/27abbd50-923e-4150-8073-95a6df5ff47e-kube-api-access-7szst\") pod \"dns-default-fbkc6\" (UID: \"27abbd50-923e-4150-8073-95a6df5ff47e\") " pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.707274 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-spqdc\" (UniqueName: \"kubernetes.io/projected/313d594f-05f3-4875-8cd4-1bab1042ba29-kube-api-access-spqdc\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.707299 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/27abbd50-923e-4150-8073-95a6df5ff47e-tmp-dir\") pod \"dns-default-fbkc6\" (UID: \"27abbd50-923e-4150-8073-95a6df5ff47e\") " pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.707336 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fc8gh\" (UniqueName: \"kubernetes.io/projected/5aefd7ee-e68e-4038-a035-dd6b44194e2d-kube-api-access-fc8gh\") pod \"ingress-canary-k8jnm\" (UID: \"5aefd7ee-e68e-4038-a035-dd6b44194e2d\") " 
pod="openshift-ingress-canary/ingress-canary-k8jnm" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.707405 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/313d594f-05f3-4875-8cd4-1bab1042ba29-csi-data-dir\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.707656 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/313d594f-05f3-4875-8cd4-1bab1042ba29-csi-data-dir\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.708384 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4825a732-6888-45c1-845a-07b125e37de7-images\") pod \"machine-config-operator-67c9d58cbb-wq89m\" (UID: \"4825a732-6888-45c1-845a-07b125e37de7\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.708630 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9ctzk\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.708772 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-ready\") pod \"cni-sysctl-allowlist-ds-9ctzk\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.713331 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a423b571-b9cd-4400-b95c-4d2f6073413e-tmp-dir\") pod \"kube-apiserver-operator-575994946d-sqftg\" (UID: \"a423b571-b9cd-4400-b95c-4d2f6073413e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.717534 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/313d594f-05f3-4875-8cd4-1bab1042ba29-mountpoint-dir\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.717600 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/313d594f-05f3-4875-8cd4-1bab1042ba29-registration-dir\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.717645 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/313d594f-05f3-4875-8cd4-1bab1042ba29-plugins-dir\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " 
pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.718156 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/313d594f-05f3-4875-8cd4-1bab1042ba29-socket-dir\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.718588 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27abbd50-923e-4150-8073-95a6df5ff47e-config-volume\") pod \"dns-default-fbkc6\" (UID: \"27abbd50-923e-4150-8073-95a6df5ff47e\") " pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.718636 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4825a732-6888-45c1-845a-07b125e37de7-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-wq89m\" (UID: \"4825a732-6888-45c1-845a-07b125e37de7\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.719201 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/27abbd50-923e-4150-8073-95a6df5ff47e-tmp-dir\") pod \"dns-default-fbkc6\" (UID: \"27abbd50-923e-4150-8073-95a6df5ff47e\") " pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.719632 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/27abbd50-923e-4150-8073-95a6df5ff47e-metrics-tls\") pod \"dns-default-fbkc6\" (UID: \"27abbd50-923e-4150-8073-95a6df5ff47e\") " pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.719993 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-ca-trust-extracted\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.720554 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-registry-certificates\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.720919 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a423b571-b9cd-4400-b95c-4d2f6073413e-serving-cert\") pod \"kube-apiserver-operator-575994946d-sqftg\" (UID: \"a423b571-b9cd-4400-b95c-4d2f6073413e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.721369 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a423b571-b9cd-4400-b95c-4d2f6073413e-config\") pod \"kube-apiserver-operator-575994946d-sqftg\" (UID: \"a423b571-b9cd-4400-b95c-4d2f6073413e\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.728381 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-installation-pull-secrets\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.729662 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9ctzk\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.730409 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4825a732-6888-45c1-845a-07b125e37de7-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-wq89m\" (UID: \"4825a732-6888-45c1-845a-07b125e37de7\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.731046 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-trusted-ca\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.734349 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8452c171-2f05-4155-8af3-03424f469d98-node-bootstrap-token\") pod \"machine-config-server-6wc7f\" (UID: \"8452c171-2f05-4155-8af3-03424f469d98\") " pod="openshift-machine-config-operator/machine-config-server-6wc7f" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.734739 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5aefd7ee-e68e-4038-a035-dd6b44194e2d-cert\") pod \"ingress-canary-k8jnm\" (UID: \"5aefd7ee-e68e-4038-a035-dd6b44194e2d\") " pod="openshift-ingress-canary/ingress-canary-k8jnm" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.738827 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8452c171-2f05-4155-8af3-03424f469d98-certs\") pod \"machine-config-server-6wc7f\" (UID: \"8452c171-2f05-4155-8af3-03424f469d98\") " pod="openshift-machine-config-operator/machine-config-server-6wc7f" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.741765 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b7bn\" (UniqueName: \"kubernetes.io/projected/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-kube-api-access-2b7bn\") pod \"cni-sysctl-allowlist-ds-9ctzk\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.767770 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-registry-tls\") pod \"image-registry-66587d64c8-gsv2s\" 
(UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.784307 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x99pj\" (UniqueName: \"kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-kube-api-access-x99pj\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.785611 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.791903 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-bound-sa-token\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.795559 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-6htrq"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.795766 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.795866 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-zssql"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.804720 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a423b571-b9cd-4400-b95c-4d2f6073413e-kube-api-access\") pod \"kube-apiserver-operator-575994946d-sqftg\" (UID: \"a423b571-b9cd-4400-b95c-4d2f6073413e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.809663 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:16 crc kubenswrapper[5116]: E1209 14:16:16.810517 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:17.310504034 +0000 UTC m=+115.832248832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.821873 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwc2c\" (UniqueName: \"kubernetes.io/projected/8452c171-2f05-4155-8af3-03424f469d98-kube-api-access-dwc2c\") pod \"machine-config-server-6wc7f\" (UID: \"8452c171-2f05-4155-8af3-03424f469d98\") " pod="openshift-machine-config-operator/machine-config-server-6wc7f" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.843553 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-k9645"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.844244 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.848043 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.848944 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7szst\" (UniqueName: \"kubernetes.io/projected/27abbd50-923e-4150-8073-95a6df5ff47e-kube-api-access-7szst\") pod \"dns-default-fbkc6\" (UID: \"27abbd50-923e-4150-8073-95a6df5ff47e\") " pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.868944 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.873590 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-spqdc\" (UniqueName: \"kubernetes.io/projected/313d594f-05f3-4875-8cd4-1bab1042ba29-kube-api-access-spqdc\") pod \"csi-hostpathplugin-2gk88\" (UID: \"313d594f-05f3-4875-8cd4-1bab1042ba29\") " pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.875132 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt"] Dec 09 14:16:16 crc kubenswrapper[5116]: W1209 14:16:16.882813 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03bd8401_ab8b_4d8c_a1a8_d9341a7becf9.slice/crio-8bce96b288415ccdd8dc7f9685f5156c72e7b91cc844c18db212b78c1e309f16 WatchSource:0}: Error finding container 8bce96b288415ccdd8dc7f9685f5156c72e7b91cc844c18db212b78c1e309f16: Status 404 returned error can't find the container with id 8bce96b288415ccdd8dc7f9685f5156c72e7b91cc844c18db212b78c1e309f16 Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.887088 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.887942 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc8gh\" (UniqueName: \"kubernetes.io/projected/5aefd7ee-e68e-4038-a035-dd6b44194e2d-kube-api-access-fc8gh\") pod \"ingress-canary-k8jnm\" (UID: \"5aefd7ee-e68e-4038-a035-dd6b44194e2d\") " pod="openshift-ingress-canary/ingress-canary-k8jnm" Dec 09 14:16:16 crc kubenswrapper[5116]: W1209 14:16:16.901979 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39a9d4e2_1d1e_4422_ae4d_3c9b25e3aada.slice/crio-c040691c93bd6de501e0625bdc68c7c5a24f4449c0b2eb8673d84b66e467f728 WatchSource:0}: Error finding container c040691c93bd6de501e0625bdc68c7c5a24f4449c0b2eb8673d84b66e467f728: Status 404 returned error can't find the container with id c040691c93bd6de501e0625bdc68c7c5a24f4449c0b2eb8673d84b66e467f728 Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.907344 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcx2g\" (UniqueName: \"kubernetes.io/projected/4825a732-6888-45c1-845a-07b125e37de7-kube-api-access-mcx2g\") pod \"machine-config-operator-67c9d58cbb-wq89m\" (UID: \"4825a732-6888-45c1-845a-07b125e37de7\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.910700 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:16 crc kubenswrapper[5116]: E1209 14:16:16.911107 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:17.41107805 +0000 UTC m=+115.932822848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.915327 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.916447 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-5rkz7"] Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.925375 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-k8jnm" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.934819 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.937202 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6wc7f" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.963922 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2gk88" Dec 09 14:16:16 crc kubenswrapper[5116]: I1209 14:16:16.987203 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f"] Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.014864 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:17 crc kubenswrapper[5116]: E1209 14:16:17.015450 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:17.515433397 +0000 UTC m=+116.037178195 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:17 crc kubenswrapper[5116]: W1209 14:16:17.086031 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16b3601d_79fa_43bf_ac32_ff4af19c5a3f.slice/crio-f3d52d63040e037c991fd9f18a959d6e632db593596c55c0a96e5f8f17c7d565 WatchSource:0}: Error finding container f3d52d63040e037c991fd9f18a959d6e632db593596c55c0a96e5f8f17c7d565: Status 404 returned error can't find the container with id f3d52d63040e037c991fd9f18a959d6e632db593596c55c0a96e5f8f17c7d565 Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.101125 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t"] Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.118756 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:17 crc kubenswrapper[5116]: E1209 14:16:17.119449 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:17.619425454 +0000 UTC m=+116.141170262 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.169201 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zttss"] Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.179522 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4"] Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.194795 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.232680 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:17 crc kubenswrapper[5116]: E1209 14:16:17.233092 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:17.733072138 +0000 UTC m=+116.254816936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.336082 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:17 crc kubenswrapper[5116]: E1209 14:16:17.336860 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:17.836837279 +0000 UTC m=+116.358582077 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.356676 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq"] Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.381104 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-nqzwb"] Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.417267 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" event={"ID":"f436eb65-e5b7-4b61-9072-699f6c071102","Type":"ContainerStarted","Data":"72ed29edb1af17199fe98c13f1835e90b35a0bd8bcfd693427faac515abb900b"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.424618 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-8sdgn" event={"ID":"f3706088-2315-4b35-852b-1327e8a99d18","Type":"ContainerStarted","Data":"c967ebfba0072da2846f4f9cf44aab2581267458dd793d4566dcd90a0a199e2c"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.432053 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" event={"ID":"f4a319d1-e0a4-4b2d-a40a-b5761f1b8cf9","Type":"ContainerStarted","Data":"d0df82f90ed98d4d6a0b2c2868a7f16ebad8f1e7ce3fd1c5020d92409a140a5c"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.438262 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zttss" event={"ID":"a24e13f1-113b-4044-842b-a13d6d620655","Type":"ContainerStarted","Data":"473f0be9db1719b1c2616718c9781b991fe66dad13acc15a35b45e08d3f71073"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.439933 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:17 crc kubenswrapper[5116]: E1209 14:16:17.440581 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:17.940564749 +0000 UTC m=+116.462309547 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.441025 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" event={"ID":"327bd87a-2375-4b04-b49f-173966bba4fc","Type":"ContainerStarted","Data":"dbdc555562146e414379ab1e5a1b3f94e988ebcac24b34a3483a799f0d4853c2"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.443858 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" event={"ID":"39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada","Type":"ContainerStarted","Data":"c040691c93bd6de501e0625bdc68c7c5a24f4449c0b2eb8673d84b66e467f728"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.450018 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" event={"ID":"f3316637-1d62-4c98-a599-437d5d706de7","Type":"ContainerStarted","Data":"9063445b3e483527cd42ace9ab46a726627523aa3770ee435ea15053c87389d1"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.458038 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f" event={"ID":"16b3601d-79fa-43bf-ac32-ff4af19c5a3f","Type":"ContainerStarted","Data":"f3d52d63040e037c991fd9f18a959d6e632db593596c55c0a96e5f8f17c7d565"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.461360 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" event={"ID":"42fe2705-f9ee-4e26-8e56-2730ba8f6196","Type":"ContainerStarted","Data":"02ed87818af791432e1275ba9686304ac82f9e09666c5f2a431215b390376b1d"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.468469 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-d772b" event={"ID":"10858c0e-9efb-4468-bfec-3ef3aa1a6579","Type":"ContainerStarted","Data":"8981f82e0e89b68f6cf624b4b05443d820a1d4acc2145425334e48e17bde1e65"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.470484 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-zssql" event={"ID":"fdd55bc4-7b67-4fca-9873-1198fb68274b","Type":"ContainerStarted","Data":"e676e0f3655dae3f5d843d7b6ed796c1b5c86c2fcb23f2d2205db83e469f9487"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.478199 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" event={"ID":"627f3889-5fe0-4a44-9def-9363af7a5979","Type":"ContainerStarted","Data":"74b0b4c3cc331dc204f27b4008a6d59dd903f9b118798853c18ccff3747c42f1"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.482483 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-km7vh" event={"ID":"e396c726-e045-447e-9420-93f09255e695","Type":"ContainerStarted","Data":"74ce6f98ee0cfbab385506fad1e4d0baa2b16c45ad5cff1c63eb41b65dc823b6"} 
Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.500645 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" event={"ID":"5e9a5cea-a4d8-4f7e-8ad4-708b692c7372","Type":"ContainerStarted","Data":"960ff096c882f912c6b77a5fc848ee716a66ebd7255d43a674e7e52d25243131"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.501378 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.517277 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" event={"ID":"89c88c0b-1e12-405a-96ef-49bab04d20f5","Type":"ContainerStarted","Data":"e57a752dd9615857910542b54f42ff5ebb8a7e61b4d0fec2c2efb04523eca158"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.524417 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-tx992" event={"ID":"22f707dd-e3f1-40e4-bc80-72d0d0ccf8ad","Type":"ContainerStarted","Data":"b126a7981322c9c33a0887cef553ba29d2787ab5d2365805932707bda19ad41c"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.524703 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-tx992" Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.528344 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-tx992 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.528387 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-tx992" podUID="22f707dd-e3f1-40e4-bc80-72d0d0ccf8ad" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.536348 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" event={"ID":"4a6214c5-1554-43a3-82d3-65532d7a79a4","Type":"ContainerStarted","Data":"309129def5e3a28bc2f0c34173fbf5e6d30f1829ce763594e60dc7527651488b"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.537259 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.549312 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k" event={"ID":"89cfa976-43a8-469f-ba75-7a630ae3e072","Type":"ContainerStarted","Data":"8eb46a7f7ad52812678aa776c4db0b2322c7e72da71a2ce2d699a7e4931adc02"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.550433 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:17 crc kubenswrapper[5116]: E1209 14:16:17.551922 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:18.051900142 +0000 UTC m=+116.573644940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.559938 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" event={"ID":"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae","Type":"ContainerStarted","Data":"51a703e91544bccd044896cef3e42d6a4b9932e35511c439a71a4433c1aad066"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.577001 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" event={"ID":"8d229a45-586a-4cf8-9e25-fd80224017fb","Type":"ContainerStarted","Data":"11670b51f7c7c8dfa6608af9691b05b524ebdb45753d176e1142f595d081ab89"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.595673 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" event={"ID":"908467fd-ce00-441d-a504-dce785c290f2","Type":"ContainerStarted","Data":"9e0910e76765f7d21de8f4d19dfb289fdb61bfede2a49fe925a8df441cc2a2db"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.596122 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.600840 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" event={"ID":"b0adc6af-a11b-4bad-83bc-e1be5945c05f","Type":"ContainerStarted","Data":"c4627ba98126b84cc3b20082a038a160b287ae843e86bab0683fd8f1c329d5e2"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.614608 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" event={"ID":"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9","Type":"ContainerStarted","Data":"10f11336e7624b1f29ca7c5d318dca0b89d62e2059697246089b86127ec46aa6"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.623846 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x"] Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.630126 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" event={"ID":"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6","Type":"ContainerStarted","Data":"ea5d448a9924508cf0c27c8d00f6d85584b8664b6e54f7195e52a2343f7bdc5a"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.642840 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" event={"ID":"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9","Type":"ContainerStarted","Data":"8bce96b288415ccdd8dc7f9685f5156c72e7b91cc844c18db212b78c1e309f16"} Dec 09 14:16:17 crc 
kubenswrapper[5116]: I1209 14:16:17.650429 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" event={"ID":"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb","Type":"ContainerStarted","Data":"4caaf1505348a20a9204a5f039c8616b8c911db7f3848ad5194ebf6c664ea11b"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.660220 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" event={"ID":"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2","Type":"ContainerStarted","Data":"28e2abb7c3dba159b3c7c3d7ba62a2bbc58d67130c07adf80419b81e2ab4a696"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.660923 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:17 crc kubenswrapper[5116]: E1209 14:16:17.663609 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:18.163591514 +0000 UTC m=+116.685336312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.670106 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6"] Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.672996 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" event={"ID":"b42f0915-c83b-4abd-a4ba-144cd754c9a6","Type":"ContainerStarted","Data":"745738aeb4105c8d8ca47b773602523b78bb83792868a0b5ae169d9655bfa9e5"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.681986 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6htrq" event={"ID":"1c05eb02-555a-4906-85e9-3a0eac0cdbc2","Type":"ContainerStarted","Data":"7caf26efe54e27bf468209638c1062ac7db5034f67c1a857120bae88c62441ca"} Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.683062 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn"] Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.702030 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9"] Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.745535 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2gk88"] Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.776585 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:17 crc kubenswrapper[5116]: E1209 14:16:17.777190 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:18.277160756 +0000 UTC m=+116.798905544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.791051 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fbkc6"] Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.881458 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:17 crc kubenswrapper[5116]: E1209 14:16:17.882210 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:18.382197681 +0000 UTC m=+116.903942479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:17 crc kubenswrapper[5116]: I1209 14:16:17.982677 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:17 crc kubenswrapper[5116]: E1209 14:16:17.983637 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:18.48361675 +0000 UTC m=+117.005361548 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.082746 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg"] Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.089072 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:18 crc kubenswrapper[5116]: E1209 14:16:18.089489 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:18.589475437 +0000 UTC m=+117.111220235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.096005 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.137641 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-k8jnm"] Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.191026 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:18 crc kubenswrapper[5116]: E1209 14:16:18.191804 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:18.691775119 +0000 UTC m=+117.213519927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.278078 5116 ???:1] "http: TLS handshake error from 192.168.126.11:44970: no serving certificate available for the kubelet" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.295761 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:18 crc kubenswrapper[5116]: E1209 14:16:18.296087 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:18.796075844 +0000 UTC m=+117.317820642 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.386522 5116 ???:1] "http: TLS handshake error from 192.168.126.11:44984: no serving certificate available for the kubelet" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.397702 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:18 crc kubenswrapper[5116]: E1209 14:16:18.398130 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:18.898113979 +0000 UTC m=+117.419858777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.484369 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-kdm79" podStartSLOduration=96.484352414 podStartE2EDuration="1m36.484352414s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:18.439130951 +0000 UTC m=+116.960875749" watchObservedRunningTime="2025-12-09 14:16:18.484352414 +0000 UTC m=+117.006097212" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.486480 5116 ???:1] "http: TLS handshake error from 192.168.126.11:44998: no serving certificate available for the kubelet" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.487183 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m"] Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.504397 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:18 crc kubenswrapper[5116]: E1209 14:16:18.504752 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:19.004735756 +0000 UTC m=+117.526480554 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.525650 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" podStartSLOduration=96.525624682 podStartE2EDuration="1m36.525624682s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:18.501777408 +0000 UTC m=+117.023522206" watchObservedRunningTime="2025-12-09 14:16:18.525624682 +0000 UTC m=+117.047369530" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.527163 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" podStartSLOduration=96.527154933 podStartE2EDuration="1m36.527154933s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:18.525417487 +0000 UTC m=+117.047162285" watchObservedRunningTime="2025-12-09 14:16:18.527154933 +0000 UTC m=+117.048899731" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.537983 5116 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-g2vff container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.538049 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" podUID="4a6214c5-1554-43a3-82d3-65532d7a79a4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.556086 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-tx992" podStartSLOduration=96.556070252 podStartE2EDuration="1m36.556070252s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:18.554290195 +0000 UTC m=+117.076035013" watchObservedRunningTime="2025-12-09 14:16:18.556070252 +0000 UTC m=+117.077815040" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.581975 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45002: no serving certificate available for the kubelet" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.609694 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-8sdgn" podStartSLOduration=96.609677939 podStartE2EDuration="1m36.609677939s" podCreationTimestamp="2025-12-09 
14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:18.609554706 +0000 UTC m=+117.131299554" watchObservedRunningTime="2025-12-09 14:16:18.609677939 +0000 UTC m=+117.131422737" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.610118 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:18 crc kubenswrapper[5116]: E1209 14:16:18.610479 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:19.11046446 +0000 UTC m=+117.632209258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.649381 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-9qk2k" podStartSLOduration=96.649365185 podStartE2EDuration="1m36.649365185s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:18.645544983 +0000 UTC m=+117.167289771" watchObservedRunningTime="2025-12-09 14:16:18.649365185 +0000 UTC m=+117.171109983" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.694494 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" podStartSLOduration=96.694472615 podStartE2EDuration="1m36.694472615s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:18.690117179 +0000 UTC m=+117.211861977" watchObservedRunningTime="2025-12-09 14:16:18.694472615 +0000 UTC m=+117.216217413" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.705236 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45016: no serving certificate available for the kubelet" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.712300 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:18 crc kubenswrapper[5116]: E1209 14:16:18.712925 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:19.212910876 +0000 UTC m=+117.734655674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.729737 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pw8wg" podStartSLOduration=96.729720233 podStartE2EDuration="1m36.729720233s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:18.726387574 +0000 UTC m=+117.248132372" watchObservedRunningTime="2025-12-09 14:16:18.729720233 +0000 UTC m=+117.251465031" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.795436 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45028: no serving certificate available for the kubelet" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.814554 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:18 crc kubenswrapper[5116]: E1209 14:16:18.814699 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:19.314667714 +0000 UTC m=+117.836412512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.814926 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:18 crc kubenswrapper[5116]: E1209 14:16:18.815418 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:19.315407983 +0000 UTC m=+117.837152781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.859175 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" event={"ID":"7bd4aedc-dcf3-4ea6-9296-7e70baf5d3d9","Type":"ContainerStarted","Data":"e1bae944d2678fe8e627f78259a7165a271dd5299a5de397086778afdc2b6866"} Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.879080 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" event={"ID":"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2","Type":"ContainerStarted","Data":"5383d2819e812ea3bf8e733f949d000d14f334b84b7920e805d6f22b8ee73488"} Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.880425 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.886377 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45040: no serving certificate available for the kubelet" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.894068 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6htrq" event={"ID":"1c05eb02-555a-4906-85e9-3a0eac0cdbc2","Type":"ContainerStarted","Data":"9875f8f892a063a2b554b76f475904cbb729dd759c1b08fa85d8e8f131c7c0a7"} Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.894987 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-gj79t" podStartSLOduration=95.89497301 podStartE2EDuration="1m35.89497301s" podCreationTimestamp="2025-12-09 14:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:18.893692256 +0000 UTC m=+117.415437054" watchObservedRunningTime="2025-12-09 14:16:18.89497301 +0000 UTC m=+117.416717808" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.906968 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" event={"ID":"f436eb65-e5b7-4b61-9072-699f6c071102","Type":"ContainerStarted","Data":"428022aef016c942bc8b7d29a1cabe20f21651a38c241d2262b331de5a246c7e"} Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.915933 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:18 crc kubenswrapper[5116]: E1209 14:16:18.916394 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-09 14:16:19.41636981 +0000 UTC m=+117.938114598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.925588 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" podStartSLOduration=95.925571895 podStartE2EDuration="1m35.925571895s" podCreationTimestamp="2025-12-09 14:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:18.923543341 +0000 UTC m=+117.445288149" watchObservedRunningTime="2025-12-09 14:16:18.925571895 +0000 UTC m=+117.447316693" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.954263 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gj2tv" podStartSLOduration=96.954248628 podStartE2EDuration="1m36.954248628s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:18.953474357 +0000 UTC m=+117.475219155" watchObservedRunningTime="2025-12-09 14:16:18.954248628 +0000 UTC m=+117.475993426" Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.961238 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" event={"ID":"f3316637-1d62-4c98-a599-437d5d706de7","Type":"ContainerStarted","Data":"966237445fa446e8cfc620e5fbd28b81c1a929941af155df29f543fd5f3e73ab"} Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.976533 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-k8jnm" event={"ID":"5aefd7ee-e68e-4038-a035-dd6b44194e2d","Type":"ContainerStarted","Data":"e07c804f9e25bad949b55d579935917c423d4a42b5ec9c171446badd8ba6e882"} Dec 09 14:16:18 crc kubenswrapper[5116]: I1209 14:16:18.994826 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" event={"ID":"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7","Type":"ContainerStarted","Data":"6ca369308b706e2468cf3bffa857dabcf2c7913ed81e541d5f05c684f44e8920"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.008195 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6wc7f" event={"ID":"8452c171-2f05-4155-8af3-03424f469d98","Type":"ContainerStarted","Data":"3902a7cc9ef2370ea237335f0deea9ed37601f13e95791a37a911d8309271b1a"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.009777 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" event={"ID":"42fe2705-f9ee-4e26-8e56-2730ba8f6196","Type":"ContainerStarted","Data":"5070b77cfb97d30c9e7ad08cd695357d30e39f054753020b5a144725d6608a53"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.010905 5116 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.018178 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:19 crc kubenswrapper[5116]: E1209 14:16:19.018550 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:19.518533558 +0000 UTC m=+118.040278356 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.035114 5116 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-k9645 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.035183 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" podUID="42fe2705-f9ee-4e26-8e56-2730ba8f6196" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.049796 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" podStartSLOduration=96.04978057 podStartE2EDuration="1m36.04978057s" podCreationTimestamp="2025-12-09 14:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:19.049467902 +0000 UTC m=+117.571212700" watchObservedRunningTime="2025-12-09 14:16:19.04978057 +0000 UTC m=+117.571525368" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.050906 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dnlz8" podStartSLOduration=97.05089822 podStartE2EDuration="1m37.05089822s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:18.991075838 +0000 UTC m=+117.512820636" watchObservedRunningTime="2025-12-09 14:16:19.05089822 +0000 UTC m=+117.572643018" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.076019 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-d772b" 
event={"ID":"10858c0e-9efb-4468-bfec-3ef3aa1a6579","Type":"ContainerStarted","Data":"ffd9c03b5f9dfefc73fb8c673d8ddb7705dce1c6bef2595685990cbc4d508d81"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.102011 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45044: no serving certificate available for the kubelet" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.110694 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-d772b" podStartSLOduration=96.110676559 podStartE2EDuration="1m36.110676559s" podCreationTimestamp="2025-12-09 14:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:19.109379405 +0000 UTC m=+117.631124203" watchObservedRunningTime="2025-12-09 14:16:19.110676559 +0000 UTC m=+117.632421357" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.122579 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:19 crc kubenswrapper[5116]: E1209 14:16:19.123293 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:19.623262004 +0000 UTC m=+118.145006812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.125049 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-zssql" event={"ID":"fdd55bc4-7b67-4fca-9873-1198fb68274b","Type":"ContainerStarted","Data":"9bf80e97974f3c3d9f269a615cd394a4d7f5ed39f2849d6ca324cf1fa1b25bcd"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.125971 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.142904 5116 patch_prober.go:28] interesting pod/console-operator-67c89758df-zssql container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.142992 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-zssql" podUID="fdd55bc4-7b67-4fca-9873-1198fb68274b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.163165 5116 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" event={"ID":"627f3889-5fe0-4a44-9def-9363af7a5979","Type":"ContainerStarted","Data":"575b18b5ee2f3d6069e5eceff251e0581351df5b03d696627fc3b5ee2968d312"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.174463 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-km7vh" event={"ID":"e396c726-e045-447e-9420-93f09255e695","Type":"ContainerStarted","Data":"760d15a1aff9dba2a1ea61b9dd5d4f0d2cd5228348de4cc0b9ff9dc3b75b1548"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.175726 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6" event={"ID":"99f99897-fb5a-4ed4-8687-68b4a8ac5c2b","Type":"ContainerStarted","Data":"c44b1fb80a05c01d9e2b824cdb4c2031e33844f582571ba34fe1732e35353eca"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.175771 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6" event={"ID":"99f99897-fb5a-4ed4-8687-68b4a8ac5c2b","Type":"ContainerStarted","Data":"8a5ffe91f14b6e8b9cbead188d042494295d7ccda569474ad1846efca7f2928c"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.177617 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" event={"ID":"89c88c0b-1e12-405a-96ef-49bab04d20f5","Type":"ContainerStarted","Data":"015eeea4a8d4fd4de411e3c85dd0ac358ff7539597381cc49985eacab8fa4107"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.225447 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:19 crc kubenswrapper[5116]: E1209 14:16:19.226062 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:19.726050389 +0000 UTC m=+118.247795187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.252152 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" event={"ID":"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9","Type":"ContainerStarted","Data":"a039818d60e65a5d6ce30d359b4d6cea9be3581af73e13d9efc1f36fb1c65f26"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.256357 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-42qdh" event={"ID":"9ae31159-3efc-4516-830c-cabd140b3a6b","Type":"ContainerStarted","Data":"fcd2b35d3dc75446008de4e9e26f7ac7f72be3c280d259bda423ed3c13a20ae8"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.259553 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" event={"ID":"b42f0915-c83b-4abd-a4ba-144cd754c9a6","Type":"ContainerStarted","Data":"44d4092bf9040ac94cae56245efdc532c5ecb0d3acc825e1798dcb0d1c67975e"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.266583 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq" event={"ID":"fc5fc2fd-0910-4e85-a543-060ec0dff17a","Type":"ContainerStarted","Data":"3bb99c95ae40f3297f8adc823d7dcd429e995230d147be64902147d91c6982e6"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.267768 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2gk88" event={"ID":"313d594f-05f3-4875-8cd4-1bab1042ba29","Type":"ContainerStarted","Data":"acde97d617c5c51fdbb11e40389bc0b61f9c897eca0933fb2f77de5da07e50b9"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.269353 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9" event={"ID":"ee06afad-5b97-43f1-b61b-cd275363814c","Type":"ContainerStarted","Data":"9de5163873a7321487658f4ddcfa976deb6dc2295ee24a4dc906ef6314587235"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.270930 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zttss" event={"ID":"a24e13f1-113b-4044-842b-a13d6d620655","Type":"ContainerStarted","Data":"17b590bb1e37be17214a7825efe01ad6ef9ab26771edddec76ecb259ec202c28"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.271906 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" event={"ID":"327bd87a-2375-4b04-b49f-173966bba4fc","Type":"ContainerStarted","Data":"2cdb4970a79f6cf91c2262a957a4c0f4e336fa4c0d0374c827c5f8e0075d51e4"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.272746 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" event={"ID":"39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada","Type":"ContainerStarted","Data":"51844cc99710752d10ebeba383a3c3be5238a3b7f86032be5d77315665dc717e"} Dec 09 14:16:19 crc 
kubenswrapper[5116]: I1209 14:16:19.273359 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" event={"ID":"1e0fc80f-04b1-4b8a-9c52-c615211955b0","Type":"ContainerStarted","Data":"e476d0294c1da80532bd785342092defcae88c4556a4172e0e56ae6860a180d1"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.274011 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" event={"ID":"62cc8950-967d-4052-a42a-8b4223a1f9ab","Type":"ContainerStarted","Data":"86b715fbb018fc69d43e7f5400d63bbc686555ce04a74e3599731fdbb953fb41"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.279250 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f" event={"ID":"16b3601d-79fa-43bf-ac32-ff4af19c5a3f","Type":"ContainerStarted","Data":"3a5faa0188d020f14073e78e4c27e765828f6d00f78dfd2aaeec2a7d2b0b53bb"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.289581 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-nqzwb" event={"ID":"8699f161-ee9b-4da7-9074-6af031d37b61","Type":"ContainerStarted","Data":"5e642a66b86e47b1079bd4eb9be91eee8c8ec2f0a2a9b749bed7ae587df5ebfc"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.296444 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fbkc6" event={"ID":"27abbd50-923e-4150-8073-95a6df5ff47e","Type":"ContainerStarted","Data":"cccb6543a8c10e1881509677d1cc9f6e968eb028e666116913813e2bebc6483e"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.307888 5116 generic.go:358] "Generic (PLEG): container finished" podID="bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae" containerID="bc662612a56046f912d9295d06aa337b6f2d72df37a83e3c233586c05e95c114" exitCode=0 Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.307987 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" event={"ID":"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae","Type":"ContainerDied","Data":"bc662612a56046f912d9295d06aa337b6f2d72df37a83e3c233586c05e95c114"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.316483 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" event={"ID":"4825a732-6888-45c1-845a-07b125e37de7","Type":"ContainerStarted","Data":"ec802520e3424c00b77ea7e237547ed448739e3615c360445b89473ea1f202e1"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.319818 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" event={"ID":"a423b571-b9cd-4400-b95c-4d2f6073413e","Type":"ContainerStarted","Data":"90c975bedd1e6dba3acd6157351066899b4349d654a69916263d09a423a4350f"} Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.326871 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:19 crc kubenswrapper[5116]: E1209 14:16:19.327065 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:19.827044547 +0000 UTC m=+118.348789345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.327278 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:19 crc kubenswrapper[5116]: E1209 14:16:19.327927 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:19.8279186 +0000 UTC m=+118.349663398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.334529 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-zssql" podStartSLOduration=97.334513375 podStartE2EDuration="1m37.334513375s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:19.16294746 +0000 UTC m=+117.684692258" watchObservedRunningTime="2025-12-09 14:16:19.334513375 +0000 UTC m=+117.856258173" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.430374 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:19 crc kubenswrapper[5116]: E1209 14:16:19.430672 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:19.930623833 +0000 UTC m=+118.452368631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.431395 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:19 crc kubenswrapper[5116]: E1209 14:16:19.433604 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:19.933593272 +0000 UTC m=+118.455338070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.461412 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-tx992 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.461459 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-tx992" podUID="22f707dd-e3f1-40e4-bc80-72d0d0ccf8ad" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.468138 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.507052 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-52tsw" podStartSLOduration=97.507038056 podStartE2EDuration="1m37.507038056s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:19.506548713 +0000 UTC m=+118.028293511" watchObservedRunningTime="2025-12-09 14:16:19.507038056 +0000 UTC m=+118.028782854" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.507754 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-kmf69" podStartSLOduration=97.507749415 podStartE2EDuration="1m37.507749415s" podCreationTimestamp="2025-12-09 
14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:19.480815928 +0000 UTC m=+118.002560726" watchObservedRunningTime="2025-12-09 14:16:19.507749415 +0000 UTC m=+118.029494213" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.533533 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:19 crc kubenswrapper[5116]: E1209 14:16:19.534740 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:20.034724393 +0000 UTC m=+118.556469191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.565987 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.633339 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" podStartSLOduration=97.633311476 podStartE2EDuration="1m37.633311476s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:19.602188658 +0000 UTC m=+118.123933456" watchObservedRunningTime="2025-12-09 14:16:19.633311476 +0000 UTC m=+118.155056274" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.641285 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" podStartSLOduration=96.641263458 podStartE2EDuration="1m36.641263458s" podCreationTimestamp="2025-12-09 14:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:19.541083372 +0000 UTC m=+118.062828160" watchObservedRunningTime="2025-12-09 14:16:19.641263458 +0000 UTC m=+118.163008256" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.644590 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:19 crc kubenswrapper[5116]: E1209 14:16:19.658561 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:20.158542878 +0000 UTC m=+118.680287676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.665248 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zttss" podStartSLOduration=97.665210205 podStartE2EDuration="1m37.665210205s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:19.648987343 +0000 UTC m=+118.170732141" watchObservedRunningTime="2025-12-09 14:16:19.665210205 +0000 UTC m=+118.186955023" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.740838 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq" podStartSLOduration=97.740802217 podStartE2EDuration="1m37.740802217s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:19.734796367 +0000 UTC m=+118.256541165" watchObservedRunningTime="2025-12-09 14:16:19.740802217 +0000 UTC m=+118.262547015" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.742559 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" podStartSLOduration=79.742550993 podStartE2EDuration="1m19.742550993s" podCreationTimestamp="2025-12-09 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:19.71046849 +0000 UTC m=+118.232213288" watchObservedRunningTime="2025-12-09 14:16:19.742550993 +0000 UTC m=+118.264295791" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.751763 5116 scope.go:117] "RemoveContainer" containerID="97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.761643 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:19 crc kubenswrapper[5116]: E1209 14:16:19.762107 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:20.262086323 +0000 UTC m=+118.783831121 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.829212 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45048: no serving certificate available for the kubelet" Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.868465 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:19 crc kubenswrapper[5116]: E1209 14:16:19.869225 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:20.369208563 +0000 UTC m=+118.890953361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:19 crc kubenswrapper[5116]: I1209 14:16:19.969547 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:19 crc kubenswrapper[5116]: E1209 14:16:19.970026 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:20.470002706 +0000 UTC m=+118.991747494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.084747 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:20 crc kubenswrapper[5116]: E1209 14:16:20.085288 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:20.585275643 +0000 UTC m=+119.107020441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.186354 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:20 crc kubenswrapper[5116]: E1209 14:16:20.186704 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:20.686689472 +0000 UTC m=+119.208434270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.258047 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.263939 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xd87d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:16:20 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 09 14:16:20 crc kubenswrapper[5116]: [+]process-running ok Dec 09 14:16:20 crc kubenswrapper[5116]: healthz check failed Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.264106 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" podUID="327bd87a-2375-4b04-b49f-173966bba4fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.289391 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:20 crc kubenswrapper[5116]: E1209 14:16:20.289829 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:20.789817246 +0000 UTC m=+119.311562044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.356726 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" event={"ID":"1d346ddf-4bb9-4f7a-b610-588bf9cf3ebb","Type":"ContainerStarted","Data":"f839c4310031e03300525e62b6caaf9bb4000e90fbcf2378c4f938f906a85d03"} Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.390512 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.390796 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6htrq" event={"ID":"1c05eb02-555a-4906-85e9-3a0eac0cdbc2","Type":"ContainerStarted","Data":"d91ca1949650c969b1ac187b5ce589226b6b6eaa1bbb0b67b8d4da1160836ca0"} Dec 09 14:16:20 crc kubenswrapper[5116]: E1209 14:16:20.391049 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:20.891034339 +0000 UTC m=+119.412779137 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.398298 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-wrw7v" podStartSLOduration=98.398276002 podStartE2EDuration="1m38.398276002s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:20.386045936 +0000 UTC m=+118.907790734" watchObservedRunningTime="2025-12-09 14:16:20.398276002 +0000 UTC m=+118.920020800" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.430187 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-k8jnm" event={"ID":"5aefd7ee-e68e-4038-a035-dd6b44194e2d","Type":"ContainerStarted","Data":"f004c5d756f194940bc1e92c21d9e26524556a920272d16f53294867c1cca658"} Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.433277 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6htrq" podStartSLOduration=98.433263833 podStartE2EDuration="1m38.433263833s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:20.432102912 +0000 UTC m=+118.953847710" watchObservedRunningTime="2025-12-09 14:16:20.433263833 +0000 UTC m=+118.955008631" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.468214 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-k8jnm" podStartSLOduration=7.468198472 podStartE2EDuration="7.468198472s" podCreationTimestamp="2025-12-09 14:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:20.467981687 +0000 UTC m=+118.989726485" watchObservedRunningTime="2025-12-09 14:16:20.468198472 +0000 UTC m=+118.989943270" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.479756 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" event={"ID":"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7","Type":"ContainerStarted","Data":"f35a6f5fa0ba9a66296c557256cf641e84fe698b2ef2571509f355d4dbff6ed6"} Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.480791 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.484262 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6wc7f" event={"ID":"8452c171-2f05-4155-8af3-03424f469d98","Type":"ContainerStarted","Data":"417a26fa138ea377e112f8e1c76dad379732b9ea1096ff5f8c8e05ef6daf359a"} Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.491860 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:20 crc kubenswrapper[5116]: E1209 14:16:20.492134 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:20.992122999 +0000 UTC m=+119.513867797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.507690 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-km7vh" event={"ID":"e396c726-e045-447e-9420-93f09255e695","Type":"ContainerStarted","Data":"5ce249faaa4ec33b0fcf0edc738fd43eea1f5ef9b37a11440dac871f38e7c655"} Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.508295 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" podStartSLOduration=8.508280149 podStartE2EDuration="8.508280149s" podCreationTimestamp="2025-12-09 14:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:20.506783719 +0000 UTC m=+119.028528517" watchObservedRunningTime="2025-12-09 14:16:20.508280149 +0000 UTC m=+119.030024947" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.538618 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-6wc7f" podStartSLOduration=7.5385995359999995 podStartE2EDuration="7.538599536s" podCreationTimestamp="2025-12-09 14:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:20.538211125 +0000 UTC m=+119.059955933" watchObservedRunningTime="2025-12-09 14:16:20.538599536 +0000 UTC m=+119.060344334" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.542159 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6" event={"ID":"99f99897-fb5a-4ed4-8687-68b4a8ac5c2b","Type":"ContainerStarted","Data":"503ad1eb9245f81701766347919241643737f63176a2ffabf4ad98611f262e46"} Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.555112 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.565023 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-km7vh" podStartSLOduration=98.565007848 podStartE2EDuration="1m38.565007848s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:20.564374232 +0000 UTC m=+119.086119030" watchObservedRunningTime="2025-12-09 14:16:20.565007848 +0000 UTC m=+119.086752656" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.578452 5116 generic.go:358] "Generic (PLEG): container finished" podID="b0adc6af-a11b-4bad-83bc-e1be5945c05f" containerID="093045e34856a2d07aa9e7315438e2029c3ff9f965e140570e4e3428379a9901" exitCode=0 Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.578821 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" event={"ID":"b0adc6af-a11b-4bad-83bc-e1be5945c05f","Type":"ContainerDied","Data":"093045e34856a2d07aa9e7315438e2029c3ff9f965e140570e4e3428379a9901"} Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.593976 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:20 crc kubenswrapper[5116]: E1209 14:16:20.594470 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:21.094439392 +0000 UTC m=+119.616184210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.600926 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dfvj6" podStartSLOduration=98.600908094 podStartE2EDuration="1m38.600908094s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:20.598732736 +0000 UTC m=+119.120477534" watchObservedRunningTime="2025-12-09 14:16:20.600908094 +0000 UTC m=+119.122652892" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.640309 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-42qdh" event={"ID":"9ae31159-3efc-4516-830c-cabd140b3a6b","Type":"ContainerStarted","Data":"387d60270fc8d87238240bcf0d9d1d7059e956534f714e4b1956e20c9d6dd61c"} Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.695428 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:20 crc kubenswrapper[5116]: E1209 14:16:20.700009 5116 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:21.19998944 +0000 UTC m=+119.721734228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.703725 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6xptq" event={"ID":"fc5fc2fd-0910-4e85-a543-060ec0dff17a","Type":"ContainerStarted","Data":"6d8f3741d8baed211267a9a95c9d461621e828a81584d588070345c697040ddf"} Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.752871 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9" event={"ID":"ee06afad-5b97-43f1-b61b-cd275363814c","Type":"ContainerStarted","Data":"4b92d57a95f7ffada73f720a713a8f93fbb0d4ba476167771a9f1f709f9a6acc"} Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.789978 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-42qdh" podStartSLOduration=98.789944824 podStartE2EDuration="1m38.789944824s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:20.753308089 +0000 UTC m=+119.275052917" watchObservedRunningTime="2025-12-09 14:16:20.789944824 +0000 UTC m=+119.311689622" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.790762 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dqds9" podStartSLOduration=97.790756926 podStartE2EDuration="1m37.790756926s" podCreationTimestamp="2025-12-09 14:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:20.787942751 +0000 UTC m=+119.309687549" watchObservedRunningTime="2025-12-09 14:16:20.790756926 +0000 UTC m=+119.312501724" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.796502 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" event={"ID":"39a9d4e2-1d1e-4422-ae4d-3c9b25e3aada","Type":"ContainerStarted","Data":"2b49a54d9174a75cabb3cd1c5aa8e34a9336d897dfd16db3e265dcebae37d5f9"} Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.798550 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:20 crc kubenswrapper[5116]: E1209 14:16:20.800354 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:21.30032475 +0000 UTC m=+119.822069548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.809311 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" event={"ID":"1e0fc80f-04b1-4b8a-9c52-c615211955b0","Type":"ContainerStarted","Data":"cb39f8f703ccc61ddc3914eec257f196868ed59cae8bf455b5884abf5f1be5a2"} Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.809920 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.816679 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:20 crc kubenswrapper[5116]: E1209 14:16:20.817049 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:21.317032035 +0000 UTC m=+119.838776833 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.871593 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.872305 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" event={"ID":"62cc8950-967d-4052-a42a-8b4223a1f9ab","Type":"ContainerStarted","Data":"08fca745cbebd49051cb03f9fc44758e62c84be7dbf860b699f37b3c8e21ad4c"} Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.873184 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dgmvt" podStartSLOduration=98.873167509 podStartE2EDuration="1m38.873167509s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:20.831741186 +0000 UTC m=+119.353485994" watchObservedRunningTime="2025-12-09 14:16:20.873167509 +0000 UTC m=+119.394912307" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.873212 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.900024 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f" event={"ID":"16b3601d-79fa-43bf-ac32-ff4af19c5a3f","Type":"ContainerStarted","Data":"3bfd95d5465ae8f0dd4898bc7630f6fb4b85c91c2f2114aa0975ca4887139bb7"} Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.900570 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.918384 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.931556 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:20 crc kubenswrapper[5116]: E1209 14:16:20.932681 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:21.432663762 +0000 UTC m=+119.954408560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:20 crc kubenswrapper[5116]: I1209 14:16:20.938448 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-nqzwb" event={"ID":"8699f161-ee9b-4da7-9074-6af031d37b61","Type":"ContainerStarted","Data":"75f4f5932186a48940601270582182e91b19f784c350e50450863a2021c1add3"} Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:20.993826 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fbkc6" event={"ID":"27abbd50-923e-4150-8073-95a6df5ff47e","Type":"ContainerStarted","Data":"6a17c02dc05c757dc6b3a551d72632fe7b69bdb475dc0c5c4d990ef3bcc0d4e5"} Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:20.993859 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.026449 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-k254x" podStartSLOduration=98.026436617 podStartE2EDuration="1m38.026436617s" podCreationTimestamp="2025-12-09 14:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:21.025629206 +0000 UTC m=+119.547374004" watchObservedRunningTime="2025-12-09 14:16:21.026436617 +0000 UTC m=+119.548181415" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.027962 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xphvn" podStartSLOduration=98.027944657 podStartE2EDuration="1m38.027944657s" podCreationTimestamp="2025-12-09 14:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:20.928286545 +0000 UTC m=+119.450031343" watchObservedRunningTime="2025-12-09 14:16:21.027944657 +0000 UTC m=+119.549689445" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.035718 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:21 crc kubenswrapper[5116]: E1209 14:16:21.037552 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:21.537538022 +0000 UTC m=+120.059282820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.070127 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" event={"ID":"4825a732-6888-45c1-845a-07b125e37de7","Type":"ContainerStarted","Data":"0d14ea2bfb5ec9fcf81b787e1e3d4baa9ad4456b7d054f978aaae6006b524bb1"} Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.109571 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" event={"ID":"e1331c14-9b1f-4b01-85fa-e2a4fafd3da6","Type":"ContainerStarted","Data":"eedcdfac80065ccf3e60d2d8c20d5b2e692caf97afbdbbc7534e23d6939112d2"} Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.111468 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f" podStartSLOduration=98.111452269 podStartE2EDuration="1m38.111452269s" podCreationTimestamp="2025-12-09 14:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:21.110374151 +0000 UTC m=+119.632118949" watchObservedRunningTime="2025-12-09 14:16:21.111452269 +0000 UTC m=+119.633197067" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.113257 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-tx992 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.113296 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-tx992" podUID="22f707dd-e3f1-40e4-bc80-72d0d0ccf8ad" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.116595 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.118219 5116 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-k9645 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.118275 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" podUID="42fe2705-f9ee-4e26-8e56-2730ba8f6196" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.136260 5116 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:21 crc kubenswrapper[5116]: E1209 14:16:21.137410 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:21.6373936 +0000 UTC m=+120.159138398 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.142146 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-x4svw" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.144074 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-zssql" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.171080 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-mglqx" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.185443 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45050: no serving certificate available for the kubelet" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.203369 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9ctzk"] Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.221327 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-jd2l4" podStartSLOduration=99.221314063 podStartE2EDuration="1m39.221314063s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:21.220792189 +0000 UTC m=+119.742536987" watchObservedRunningTime="2025-12-09 14:16:21.221314063 +0000 UTC m=+119.743058861" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.244446 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:21 crc kubenswrapper[5116]: E1209 14:16:21.245520 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 14:16:21.745505296 +0000 UTC m=+120.267250084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.249614 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-fbkc6" podStartSLOduration=8.249599335 podStartE2EDuration="8.249599335s" podCreationTimestamp="2025-12-09 14:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:21.249069341 +0000 UTC m=+119.770814149" watchObservedRunningTime="2025-12-09 14:16:21.249599335 +0000 UTC m=+119.771344133" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.266148 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xd87d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:16:21 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 09 14:16:21 crc kubenswrapper[5116]: [+]process-running ok Dec 09 14:16:21 crc kubenswrapper[5116]: healthz check failed Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.266201 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" podUID="327bd87a-2375-4b04-b49f-173966bba4fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.311896 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" podStartSLOduration=99.311874522 podStartE2EDuration="1m39.311874522s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:21.309597462 +0000 UTC m=+119.831342260" watchObservedRunningTime="2025-12-09 14:16:21.311874522 +0000 UTC m=+119.833619320" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.347631 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:21 crc kubenswrapper[5116]: E1209 14:16:21.347907 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:21.847891061 +0000 UTC m=+120.369635859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.448765 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:21 crc kubenswrapper[5116]: E1209 14:16:21.449456 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:21.949437353 +0000 UTC m=+120.471182151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.550823 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:21 crc kubenswrapper[5116]: E1209 14:16:21.551359 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:22.051339154 +0000 UTC m=+120.573083952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.653535 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:21 crc kubenswrapper[5116]: E1209 14:16:21.653994 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:22.153974116 +0000 UTC m=+120.675718914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.757665 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:21 crc kubenswrapper[5116]: E1209 14:16:21.758487 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:22.258448196 +0000 UTC m=+120.780192994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.859787 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:21 crc kubenswrapper[5116]: E1209 14:16:21.860175 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:22.360162562 +0000 UTC m=+120.881907350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.937212 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4s642"] Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.946466 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4s642" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.948710 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4s642"] Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.949447 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.961197 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:21 crc kubenswrapper[5116]: E1209 14:16:21.961362 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:22.461326334 +0000 UTC m=+120.983071132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:21 crc kubenswrapper[5116]: I1209 14:16:21.961719 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:21 crc kubenswrapper[5116]: E1209 14:16:21.962128 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:22.462118655 +0000 UTC m=+120.983863453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.062812 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:22 crc kubenswrapper[5116]: E1209 14:16:22.067060 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:22.56302041 +0000 UTC m=+121.084765208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.069847 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b43fdb9-c388-42c6-90d8-1fb5de88023a-catalog-content\") pod \"community-operators-4s642\" (UID: \"1b43fdb9-c388-42c6-90d8-1fb5de88023a\") " pod="openshift-marketplace/community-operators-4s642" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.069916 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssdv8\" (UniqueName: \"kubernetes.io/projected/1b43fdb9-c388-42c6-90d8-1fb5de88023a-kube-api-access-ssdv8\") pod \"community-operators-4s642\" (UID: \"1b43fdb9-c388-42c6-90d8-1fb5de88023a\") " pod="openshift-marketplace/community-operators-4s642" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.069997 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.070390 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b43fdb9-c388-42c6-90d8-1fb5de88023a-utilities\") pod \"community-operators-4s642\" (UID: \"1b43fdb9-c388-42c6-90d8-1fb5de88023a\") " pod="openshift-marketplace/community-operators-4s642" Dec 09 14:16:22 crc kubenswrapper[5116]: E1209 14:16:22.070637 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:22.570618902 +0000 UTC m=+121.092363700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.146055 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4ppp5"] Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.155817 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-wq89m" event={"ID":"4825a732-6888-45c1-845a-07b125e37de7","Type":"ContainerStarted","Data":"b1c77353239e9a4071ea8af8db7df781ac6ab1e15ed45eab875afc053b2229e1"} Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.155990 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.159773 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.160295 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" event={"ID":"a423b571-b9cd-4400-b95c-4d2f6073413e","Type":"ContainerStarted","Data":"ecb45c4905835359e34546edc6458c04c35997ed74b85d17445043480fc01887"} Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.163437 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.167440 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a829c549e9817f776a9fed59ad466d4f4fdfe302fb056679bff4bfe5c1e89c58"} Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.167492 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4ppp5"] Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.168030 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.172667 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:22 crc kubenswrapper[5116]: E1209 14:16:22.173857 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:22.673234093 +0000 UTC m=+121.194978891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.174258 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b43fdb9-c388-42c6-90d8-1fb5de88023a-catalog-content\") pod \"community-operators-4s642\" (UID: \"1b43fdb9-c388-42c6-90d8-1fb5de88023a\") " pod="openshift-marketplace/community-operators-4s642" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.174299 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ssdv8\" (UniqueName: \"kubernetes.io/projected/1b43fdb9-c388-42c6-90d8-1fb5de88023a-kube-api-access-ssdv8\") pod \"community-operators-4s642\" (UID: \"1b43fdb9-c388-42c6-90d8-1fb5de88023a\") " pod="openshift-marketplace/community-operators-4s642" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.174337 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.174476 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b43fdb9-c388-42c6-90d8-1fb5de88023a-utilities\") pod \"community-operators-4s642\" (UID: \"1b43fdb9-c388-42c6-90d8-1fb5de88023a\") " pod="openshift-marketplace/community-operators-4s642" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.174917 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b43fdb9-c388-42c6-90d8-1fb5de88023a-utilities\") pod \"community-operators-4s642\" (UID: \"1b43fdb9-c388-42c6-90d8-1fb5de88023a\") " pod="openshift-marketplace/community-operators-4s642" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.175227 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b43fdb9-c388-42c6-90d8-1fb5de88023a-catalog-content\") pod \"community-operators-4s642\" (UID: \"1b43fdb9-c388-42c6-90d8-1fb5de88023a\") " pod="openshift-marketplace/community-operators-4s642" Dec 09 14:16:22 crc kubenswrapper[5116]: E1209 14:16:22.175444 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:22.675425701 +0000 UTC m=+121.197170499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.203786 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" event={"ID":"b0adc6af-a11b-4bad-83bc-e1be5945c05f","Type":"ContainerStarted","Data":"299a8d73f3024b78b0d0e0a996b3d4d7a09a9af400a68e323dd0dc28b20c2541"} Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.204021 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" event={"ID":"b0adc6af-a11b-4bad-83bc-e1be5945c05f","Type":"ContainerStarted","Data":"7fd52ad0d5915bd50d6ffc110f2ead1b77a6553391754819fe700876209679bf"} Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.210685 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssdv8\" (UniqueName: \"kubernetes.io/projected/1b43fdb9-c388-42c6-90d8-1fb5de88023a-kube-api-access-ssdv8\") pod \"community-operators-4s642\" (UID: \"1b43fdb9-c388-42c6-90d8-1fb5de88023a\") " pod="openshift-marketplace/community-operators-4s642" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.211051 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sqftg" podStartSLOduration=100.211038689 podStartE2EDuration="1m40.211038689s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:22.210897995 +0000 UTC m=+120.732642793" watchObservedRunningTime="2025-12-09 14:16:22.211038689 +0000 UTC m=+120.732783487" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.219337 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-nqzwb" event={"ID":"8699f161-ee9b-4da7-9074-6af031d37b61","Type":"ContainerStarted","Data":"4d87498c46d00d873524a06ca4695c9fdc1597e0838468eddeb793011db86d26"} Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.245062 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fbkc6" event={"ID":"27abbd50-923e-4150-8073-95a6df5ff47e","Type":"ContainerStarted","Data":"bed0302b6220c2158002c575da72116a04c094e19ef4a3898440097151e86365"} Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.263676 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xd87d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:16:22 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 09 14:16:22 crc kubenswrapper[5116]: [+]process-running ok Dec 09 14:16:22 crc kubenswrapper[5116]: healthz check failed Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.263734 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" podUID="327bd87a-2375-4b04-b49f-173966bba4fc" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.264558 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" event={"ID":"bfd6a6a0-01f5-4706-a512-6ea9ed6b10ae","Type":"ContainerStarted","Data":"7db5ff7fdcaed43a2b7455171526e5d968d317b4ba11f3be000ac3cf841bf8e9"} Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.274303 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.275846 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.276098 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27215642-7324-4959-8b89-554060ecec24-utilities\") pod \"certified-operators-4ppp5\" (UID: \"27215642-7324-4959-8b89-554060ecec24\") " pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.276233 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xzc2\" (UniqueName: \"kubernetes.io/projected/27215642-7324-4959-8b89-554060ecec24-kube-api-access-4xzc2\") pod \"certified-operators-4ppp5\" (UID: \"27215642-7324-4959-8b89-554060ecec24\") " pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.276293 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27215642-7324-4959-8b89-554060ecec24-catalog-content\") pod \"certified-operators-4ppp5\" (UID: \"27215642-7324-4959-8b89-554060ecec24\") " pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:16:22 crc kubenswrapper[5116]: E1209 14:16:22.276379 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:22.776364217 +0000 UTC m=+121.298109015 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.298434 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=31.298414324 podStartE2EDuration="31.298414324s" podCreationTimestamp="2025-12-09 14:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:22.245903037 +0000 UTC m=+120.767647835" watchObservedRunningTime="2025-12-09 14:16:22.298414324 +0000 UTC m=+120.820159122" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.304230 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" podStartSLOduration=99.304219368 podStartE2EDuration="1m39.304219368s" podCreationTimestamp="2025-12-09 14:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:22.29640504 +0000 UTC m=+120.818149838" watchObservedRunningTime="2025-12-09 14:16:22.304219368 +0000 UTC m=+120.825964186" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.312127 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4s642" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.354444 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l4btj"] Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.359423 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-nqzwb" podStartSLOduration=100.359406627 podStartE2EDuration="1m40.359406627s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:22.354319631 +0000 UTC m=+120.876064429" watchObservedRunningTime="2025-12-09 14:16:22.359406627 +0000 UTC m=+120.881151425" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.368069 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l4btj"] Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.368225 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.382555 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.382762 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27215642-7324-4959-8b89-554060ecec24-utilities\") pod \"certified-operators-4ppp5\" (UID: \"27215642-7324-4959-8b89-554060ecec24\") " pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.383337 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4xzc2\" (UniqueName: \"kubernetes.io/projected/27215642-7324-4959-8b89-554060ecec24-kube-api-access-4xzc2\") pod \"certified-operators-4ppp5\" (UID: \"27215642-7324-4959-8b89-554060ecec24\") " pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.383564 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27215642-7324-4959-8b89-554060ecec24-catalog-content\") pod \"certified-operators-4ppp5\" (UID: \"27215642-7324-4959-8b89-554060ecec24\") " pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.394802 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" podStartSLOduration=99.394789098 podStartE2EDuration="1m39.394789098s" podCreationTimestamp="2025-12-09 14:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:22.393445203 +0000 UTC m=+120.915190001" watchObservedRunningTime="2025-12-09 14:16:22.394789098 +0000 UTC m=+120.916533896" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.396611 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27215642-7324-4959-8b89-554060ecec24-utilities\") pod \"certified-operators-4ppp5\" (UID: \"27215642-7324-4959-8b89-554060ecec24\") " pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.396938 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27215642-7324-4959-8b89-554060ecec24-catalog-content\") pod \"certified-operators-4ppp5\" (UID: \"27215642-7324-4959-8b89-554060ecec24\") " pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:16:22 crc kubenswrapper[5116]: E1209 14:16:22.442522 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:22.942502318 +0000 UTC m=+121.464247106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.484876 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:22 crc kubenswrapper[5116]: E1209 14:16:22.485086 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:22.985063931 +0000 UTC m=+121.506808729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.485497 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82099c38-ac0a-4c91-ad38-43c94ec739c1-utilities\") pod \"community-operators-l4btj\" (UID: \"82099c38-ac0a-4c91-ad38-43c94ec739c1\") " pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.485860 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82099c38-ac0a-4c91-ad38-43c94ec739c1-catalog-content\") pod \"community-operators-l4btj\" (UID: \"82099c38-ac0a-4c91-ad38-43c94ec739c1\") " pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.485896 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.486044 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq4ql\" (UniqueName: \"kubernetes.io/projected/82099c38-ac0a-4c91-ad38-43c94ec739c1-kube-api-access-jq4ql\") pod \"community-operators-l4btj\" (UID: \"82099c38-ac0a-4c91-ad38-43c94ec739c1\") " pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:22 crc kubenswrapper[5116]: E1209 14:16:22.486444 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:22.986431927 +0000 UTC m=+121.508176725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.494197 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.525946 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xzc2\" (UniqueName: \"kubernetes.io/projected/27215642-7324-4959-8b89-554060ecec24-kube-api-access-4xzc2\") pod \"certified-operators-4ppp5\" (UID: \"27215642-7324-4959-8b89-554060ecec24\") " pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.587458 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:22 crc kubenswrapper[5116]: E1209 14:16:22.587674 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.08764197 +0000 UTC m=+121.609386768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.587797 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jq4ql\" (UniqueName: \"kubernetes.io/projected/82099c38-ac0a-4c91-ad38-43c94ec739c1-kube-api-access-jq4ql\") pod \"community-operators-l4btj\" (UID: \"82099c38-ac0a-4c91-ad38-43c94ec739c1\") " pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.588010 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82099c38-ac0a-4c91-ad38-43c94ec739c1-utilities\") pod \"community-operators-l4btj\" (UID: \"82099c38-ac0a-4c91-ad38-43c94ec739c1\") " pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.588180 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82099c38-ac0a-4c91-ad38-43c94ec739c1-catalog-content\") pod \"community-operators-l4btj\" (UID: \"82099c38-ac0a-4c91-ad38-43c94ec739c1\") " pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.588207 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:22 crc kubenswrapper[5116]: E1209 14:16:22.588608 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.088599776 +0000 UTC m=+121.610344574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.589367 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82099c38-ac0a-4c91-ad38-43c94ec739c1-utilities\") pod \"community-operators-l4btj\" (UID: \"82099c38-ac0a-4c91-ad38-43c94ec739c1\") " pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.589648 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82099c38-ac0a-4c91-ad38-43c94ec739c1-catalog-content\") pod \"community-operators-l4btj\" (UID: \"82099c38-ac0a-4c91-ad38-43c94ec739c1\") " pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.612639 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq4ql\" (UniqueName: \"kubernetes.io/projected/82099c38-ac0a-4c91-ad38-43c94ec739c1-kube-api-access-jq4ql\") pod \"community-operators-l4btj\" (UID: \"82099c38-ac0a-4c91-ad38-43c94ec739c1\") " pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.689766 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:22 crc kubenswrapper[5116]: E1209 14:16:22.690234 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.190205538 +0000 UTC m=+121.711950336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.773280 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.774099 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.791889 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:22 crc kubenswrapper[5116]: E1209 14:16:22.792305 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.292292195 +0000 UTC m=+121.814036993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:22 crc kubenswrapper[5116]: W1209 14:16:22.826095 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b43fdb9_c388_42c6_90d8_1fb5de88023a.slice/crio-4ce9fd65ee6a6df8ba2c0547966671f66e242e6e786843e6b6373b2a0dee34f4 WatchSource:0}: Error finding container 4ce9fd65ee6a6df8ba2c0547966671f66e242e6e786843e6b6373b2a0dee34f4: Status 404 returned error can't find the container with id 4ce9fd65ee6a6df8ba2c0547966671f66e242e6e786843e6b6373b2a0dee34f4 Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.856128 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.856203 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tnkj6"] Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.856347 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.861647 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.861759 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.893503 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.893716 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56d17dc8-fa77-4745-9aaa-9e7f995cbf50-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"56d17dc8-fa77-4745-9aaa-9e7f995cbf50\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.893753 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56d17dc8-fa77-4745-9aaa-9e7f995cbf50-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"56d17dc8-fa77-4745-9aaa-9e7f995cbf50\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:16:22 crc kubenswrapper[5116]: E1209 14:16:22.893942 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.393927659 +0000 UTC m=+121.915672457 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.995973 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56d17dc8-fa77-4745-9aaa-9e7f995cbf50-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"56d17dc8-fa77-4745-9aaa-9e7f995cbf50\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.996026 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56d17dc8-fa77-4745-9aaa-9e7f995cbf50-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"56d17dc8-fa77-4745-9aaa-9e7f995cbf50\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.996068 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:22 crc kubenswrapper[5116]: E1209 14:16:22.996543 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.496527839 +0000 UTC m=+122.018272637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:22 crc kubenswrapper[5116]: I1209 14:16:22.997011 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56d17dc8-fa77-4745-9aaa-9e7f995cbf50-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"56d17dc8-fa77-4745-9aaa-9e7f995cbf50\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.008664 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnkj6"] Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.008699 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4s642"] Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.008827 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.025715 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56d17dc8-fa77-4745-9aaa-9e7f995cbf50-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"56d17dc8-fa77-4745-9aaa-9e7f995cbf50\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.088056 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4ppp5"] Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.098456 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:23 crc kubenswrapper[5116]: E1209 14:16:23.098759 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.598698418 +0000 UTC m=+122.120443216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.098913 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3101038-e5b7-44e7-9b29-61e6976d7da0-utilities\") pod \"certified-operators-tnkj6\" (UID: \"a3101038-e5b7-44e7-9b29-61e6976d7da0\") " pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.099088 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.099904 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3101038-e5b7-44e7-9b29-61e6976d7da0-catalog-content\") pod \"certified-operators-tnkj6\" (UID: \"a3101038-e5b7-44e7-9b29-61e6976d7da0\") " pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.099929 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv626\" (UniqueName: \"kubernetes.io/projected/a3101038-e5b7-44e7-9b29-61e6976d7da0-kube-api-access-bv626\") pod \"certified-operators-tnkj6\" (UID: \"a3101038-e5b7-44e7-9b29-61e6976d7da0\") " 
pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:16:23 crc kubenswrapper[5116]: E1209 14:16:23.101120 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.601107772 +0000 UTC m=+122.122852570 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:23 crc kubenswrapper[5116]: W1209 14:16:23.111344 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27215642_7324_4959_8b89_554060ecec24.slice/crio-652b8c149edd63eca74851a986956f92518e98183a36b1a21e884f8fb19f194b WatchSource:0}: Error finding container 652b8c149edd63eca74851a986956f92518e98183a36b1a21e884f8fb19f194b: Status 404 returned error can't find the container with id 652b8c149edd63eca74851a986956f92518e98183a36b1a21e884f8fb19f194b Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.178146 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.201533 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.201736 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3101038-e5b7-44e7-9b29-61e6976d7da0-catalog-content\") pod \"certified-operators-tnkj6\" (UID: \"a3101038-e5b7-44e7-9b29-61e6976d7da0\") " pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.201760 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bv626\" (UniqueName: \"kubernetes.io/projected/a3101038-e5b7-44e7-9b29-61e6976d7da0-kube-api-access-bv626\") pod \"certified-operators-tnkj6\" (UID: \"a3101038-e5b7-44e7-9b29-61e6976d7da0\") " pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.201799 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3101038-e5b7-44e7-9b29-61e6976d7da0-utilities\") pod \"certified-operators-tnkj6\" (UID: \"a3101038-e5b7-44e7-9b29-61e6976d7da0\") " pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:16:23 crc kubenswrapper[5116]: E1209 14:16:23.201876 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-09 14:16:23.701846483 +0000 UTC m=+122.223591281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.202249 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3101038-e5b7-44e7-9b29-61e6976d7da0-utilities\") pod \"certified-operators-tnkj6\" (UID: \"a3101038-e5b7-44e7-9b29-61e6976d7da0\") " pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.202287 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.202589 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3101038-e5b7-44e7-9b29-61e6976d7da0-catalog-content\") pod \"certified-operators-tnkj6\" (UID: \"a3101038-e5b7-44e7-9b29-61e6976d7da0\") " pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:16:23 crc kubenswrapper[5116]: E1209 14:16:23.202687 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.702679555 +0000 UTC m=+122.224424353 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.234320 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv626\" (UniqueName: \"kubernetes.io/projected/a3101038-e5b7-44e7-9b29-61e6976d7da0-kube-api-access-bv626\") pod \"certified-operators-tnkj6\" (UID: \"a3101038-e5b7-44e7-9b29-61e6976d7da0\") " pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.263177 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xd87d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:16:23 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 09 14:16:23 crc kubenswrapper[5116]: [+]process-running ok Dec 09 14:16:23 crc kubenswrapper[5116]: healthz check failed Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.263249 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" podUID="327bd87a-2375-4b04-b49f-173966bba4fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.268675 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4ppp5" event={"ID":"27215642-7324-4959-8b89-554060ecec24","Type":"ContainerStarted","Data":"652b8c149edd63eca74851a986956f92518e98183a36b1a21e884f8fb19f194b"} Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.276083 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4s642" event={"ID":"1b43fdb9-c388-42c6-90d8-1fb5de88023a","Type":"ContainerStarted","Data":"2db21f732747fd25af8f55e3d4eee787fcb49eb1ac34c5b3da3852889a2e7480"} Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.276119 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4s642" event={"ID":"1b43fdb9-c388-42c6-90d8-1fb5de88023a","Type":"ContainerStarted","Data":"4ce9fd65ee6a6df8ba2c0547966671f66e242e6e786843e6b6373b2a0dee34f4"} Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.279481 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" podUID="2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://f35a6f5fa0ba9a66296c557256cf641e84fe698b2ef2571509f355d4dbff6ed6" gracePeriod=30 Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.304220 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:23 crc kubenswrapper[5116]: E1209 14:16:23.304658 5116 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.804634918 +0000 UTC m=+122.326379716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.329198 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.387850 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l4btj"] Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.407425 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:23 crc kubenswrapper[5116]: E1209 14:16:23.407701 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:23.90768877 +0000 UTC m=+122.429433568 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.509518 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:23 crc kubenswrapper[5116]: E1209 14:16:23.509621 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:24.009599192 +0000 UTC m=+122.531343990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.510070 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:23 crc kubenswrapper[5116]: E1209 14:16:23.510420 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:24.010412634 +0000 UTC m=+122.532157432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.569053 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.614434 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.614671 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.614699 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.614721 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.614760 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:16:23 crc kubenswrapper[5116]: E1209 14:16:23.615929 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:24.115909811 +0000 UTC m=+122.637654609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.617119 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.617291 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.617423 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.634278 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.634458 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.637985 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.650478 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.651011 5116 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.668539 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.696145 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnkj6"] Dec 09 14:16:23 crc kubenswrapper[5116]: E1209 14:16:23.719360 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:24.219347873 +0000 UTC m=+122.741092681 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.719084 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.792109 5116 ???:1] "http: TLS handshake error from 192.168.126.11:50634: no serving certificate available for the kubelet" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.836670 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.837017 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs\") pod \"network-metrics-daemon-pmt9f\" (UID: \"51843597-ba2b-4059-aa79-13887c6100f2\") " pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:16:23 crc kubenswrapper[5116]: E1209 14:16:23.837306 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:24.337286312 +0000 UTC m=+122.859031110 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.839833 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.863448 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51843597-ba2b-4059-aa79-13887c6100f2-metrics-certs\") pod \"network-metrics-daemon-pmt9f\" (UID: \"51843597-ba2b-4059-aa79-13887c6100f2\") " pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.872293 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.884078 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.884393 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.889515 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmt9f" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.945250 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:23 crc kubenswrapper[5116]: E1209 14:16:23.946287 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:24.446270432 +0000 UTC m=+122.968015230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.957047 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n6wc9"] Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.974009 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6wc9"] Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.974147 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:16:23 crc kubenswrapper[5116]: I1209 14:16:23.980325 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.046522 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:24 crc kubenswrapper[5116]: E1209 14:16:24.046745 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:24.546717315 +0000 UTC m=+123.068462113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.047063 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv8vx\" (UniqueName: \"kubernetes.io/projected/57ce2822-5420-457b-b3dc-1314fccf7d63-kube-api-access-fv8vx\") pod \"redhat-marketplace-n6wc9\" (UID: \"57ce2822-5420-457b-b3dc-1314fccf7d63\") " pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.047153 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:24 crc kubenswrapper[5116]: E1209 14:16:24.047495 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:24.547482405 +0000 UTC m=+123.069227203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.047736 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57ce2822-5420-457b-b3dc-1314fccf7d63-utilities\") pod \"redhat-marketplace-n6wc9\" (UID: \"57ce2822-5420-457b-b3dc-1314fccf7d63\") " pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.047851 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57ce2822-5420-457b-b3dc-1314fccf7d63-catalog-content\") pod \"redhat-marketplace-n6wc9\" (UID: \"57ce2822-5420-457b-b3dc-1314fccf7d63\") " pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.151913 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.152062 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fv8vx\" (UniqueName: \"kubernetes.io/projected/57ce2822-5420-457b-b3dc-1314fccf7d63-kube-api-access-fv8vx\") pod \"redhat-marketplace-n6wc9\" (UID: \"57ce2822-5420-457b-b3dc-1314fccf7d63\") " pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.152100 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57ce2822-5420-457b-b3dc-1314fccf7d63-utilities\") pod \"redhat-marketplace-n6wc9\" (UID: \"57ce2822-5420-457b-b3dc-1314fccf7d63\") " pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.152171 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57ce2822-5420-457b-b3dc-1314fccf7d63-catalog-content\") pod \"redhat-marketplace-n6wc9\" (UID: \"57ce2822-5420-457b-b3dc-1314fccf7d63\") " pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.152668 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57ce2822-5420-457b-b3dc-1314fccf7d63-catalog-content\") pod \"redhat-marketplace-n6wc9\" (UID: \"57ce2822-5420-457b-b3dc-1314fccf7d63\") " pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:16:24 crc kubenswrapper[5116]: E1209 14:16:24.152760 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-09 14:16:24.652745936 +0000 UTC m=+123.174490724 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.153383 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57ce2822-5420-457b-b3dc-1314fccf7d63-utilities\") pod \"redhat-marketplace-n6wc9\" (UID: \"57ce2822-5420-457b-b3dc-1314fccf7d63\") " pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.172803 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv8vx\" (UniqueName: \"kubernetes.io/projected/57ce2822-5420-457b-b3dc-1314fccf7d63-kube-api-access-fv8vx\") pod \"redhat-marketplace-n6wc9\" (UID: \"57ce2822-5420-457b-b3dc-1314fccf7d63\") " pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.253465 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:24 crc kubenswrapper[5116]: E1209 14:16:24.253779 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:24.753765354 +0000 UTC m=+123.275510152 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.258652 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xd87d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:16:24 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 09 14:16:24 crc kubenswrapper[5116]: [+]process-running ok Dec 09 14:16:24 crc kubenswrapper[5116]: healthz check failed Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.258715 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" podUID="327bd87a-2375-4b04-b49f-173966bba4fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.293827 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pmt9f"] Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.297057 5116 generic.go:358] "Generic (PLEG): container finished" podID="a3101038-e5b7-44e7-9b29-61e6976d7da0" containerID="bc933b675c58492e5c58bdfe436010522501e121ce05bb48000427ede8265540" exitCode=0 Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.297119 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnkj6" event={"ID":"a3101038-e5b7-44e7-9b29-61e6976d7da0","Type":"ContainerDied","Data":"bc933b675c58492e5c58bdfe436010522501e121ce05bb48000427ede8265540"} Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.297177 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnkj6" event={"ID":"a3101038-e5b7-44e7-9b29-61e6976d7da0","Type":"ContainerStarted","Data":"dd32427c1d0aa0a0c4f821415c87c66425ec690af440b7fc41ad9bfaab8ebcba"} Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.298213 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"56d17dc8-fa77-4745-9aaa-9e7f995cbf50","Type":"ContainerStarted","Data":"8aaaf623aee702c06729fce4a91a1917e3b857a53f8050677f1a2391bf540b84"} Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.299679 5116 generic.go:358] "Generic (PLEG): container finished" podID="27215642-7324-4959-8b89-554060ecec24" containerID="db63082743425e66779649f3667cb5e65e15fda9d60b198df880fd8d79a9782d" exitCode=0 Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.299844 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4ppp5" event={"ID":"27215642-7324-4959-8b89-554060ecec24","Type":"ContainerDied","Data":"db63082743425e66779649f3667cb5e65e15fda9d60b198df880fd8d79a9782d"} Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.303460 5116 generic.go:358] "Generic (PLEG): container finished" podID="1b43fdb9-c388-42c6-90d8-1fb5de88023a" containerID="2db21f732747fd25af8f55e3d4eee787fcb49eb1ac34c5b3da3852889a2e7480" exitCode=0 Dec 09 14:16:24 crc kubenswrapper[5116]: 
I1209 14:16:24.303540 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4s642" event={"ID":"1b43fdb9-c388-42c6-90d8-1fb5de88023a","Type":"ContainerDied","Data":"2db21f732747fd25af8f55e3d4eee787fcb49eb1ac34c5b3da3852889a2e7480"} Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.305490 5116 generic.go:358] "Generic (PLEG): container finished" podID="82099c38-ac0a-4c91-ad38-43c94ec739c1" containerID="7acd6b8afa874324bc268b1fcde5eb081528c9305426836cc9c6fdd3e90400fc" exitCode=0 Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.305577 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4btj" event={"ID":"82099c38-ac0a-4c91-ad38-43c94ec739c1","Type":"ContainerDied","Data":"7acd6b8afa874324bc268b1fcde5eb081528c9305426836cc9c6fdd3e90400fc"} Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.305618 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4btj" event={"ID":"82099c38-ac0a-4c91-ad38-43c94ec739c1","Type":"ContainerStarted","Data":"c8bf8b84dc0d45b571c537123c1f9f7b630a19ce73d5ab4215d7511e2345b908"} Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.307610 5116 generic.go:358] "Generic (PLEG): container finished" podID="03bd8401-ab8b-4d8c-a1a8-d9341a7becf9" containerID="a039818d60e65a5d6ce30d359b4d6cea9be3581af73e13d9efc1f36fb1c65f26" exitCode=0 Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.307690 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" event={"ID":"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9","Type":"ContainerDied","Data":"a039818d60e65a5d6ce30d359b4d6cea9be3581af73e13d9efc1f36fb1c65f26"} Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.317233 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.318797 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2gk88" event={"ID":"313d594f-05f3-4875-8cd4-1bab1042ba29","Type":"ContainerStarted","Data":"54f3a71b5bd21e7756386b15dbecc9ac00dc963169d044220354bcfc45a9cc2a"} Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.349291 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r4pxr"] Dec 09 14:16:24 crc kubenswrapper[5116]: W1209 14:16:24.352675 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51843597_ba2b_4059_aa79_13887c6100f2.slice/crio-3801ec03495980e3181001d9b24487cf06a6b84897efe7d43cd5980b82d7fbc8 WatchSource:0}: Error finding container 3801ec03495980e3181001d9b24487cf06a6b84897efe7d43cd5980b82d7fbc8: Status 404 returned error can't find the container with id 3801ec03495980e3181001d9b24487cf06a6b84897efe7d43cd5980b82d7fbc8 Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.354122 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:24 crc kubenswrapper[5116]: E1209 14:16:24.354447 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:24.854428023 +0000 UTC m=+123.376172821 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.357162 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4pxr"] Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.357196 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"ae3e938f515f4f9bbc661708d30b24260256373f0fb9ea810f3229898ab0ac8b"} Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.357315 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.457973 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61857eb7-32cc-4317-bb20-d11f7e1c241d-catalog-content\") pod \"redhat-marketplace-r4pxr\" (UID: \"61857eb7-32cc-4317-bb20-d11f7e1c241d\") " pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.458373 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52p8f\" (UniqueName: \"kubernetes.io/projected/61857eb7-32cc-4317-bb20-d11f7e1c241d-kube-api-access-52p8f\") pod \"redhat-marketplace-r4pxr\" (UID: \"61857eb7-32cc-4317-bb20-d11f7e1c241d\") " pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.458448 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61857eb7-32cc-4317-bb20-d11f7e1c241d-utilities\") pod \"redhat-marketplace-r4pxr\" (UID: \"61857eb7-32cc-4317-bb20-d11f7e1c241d\") " pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.458674 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:24 crc kubenswrapper[5116]: E1209 14:16:24.459101 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:24.959083018 +0000 UTC m=+123.480827816 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.559973 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.560112 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61857eb7-32cc-4317-bb20-d11f7e1c241d-catalog-content\") pod \"redhat-marketplace-r4pxr\" (UID: \"61857eb7-32cc-4317-bb20-d11f7e1c241d\") " pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.560156 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-52p8f\" (UniqueName: \"kubernetes.io/projected/61857eb7-32cc-4317-bb20-d11f7e1c241d-kube-api-access-52p8f\") pod \"redhat-marketplace-r4pxr\" (UID: \"61857eb7-32cc-4317-bb20-d11f7e1c241d\") " pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.560183 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61857eb7-32cc-4317-bb20-d11f7e1c241d-utilities\") pod \"redhat-marketplace-r4pxr\" (UID: \"61857eb7-32cc-4317-bb20-d11f7e1c241d\") " pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.560648 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61857eb7-32cc-4317-bb20-d11f7e1c241d-utilities\") pod \"redhat-marketplace-r4pxr\" (UID: \"61857eb7-32cc-4317-bb20-d11f7e1c241d\") " pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:16:24 crc kubenswrapper[5116]: E1209 14:16:24.560735 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:25.060712752 +0000 UTC m=+123.582457550 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.560999 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61857eb7-32cc-4317-bb20-d11f7e1c241d-catalog-content\") pod \"redhat-marketplace-r4pxr\" (UID: \"61857eb7-32cc-4317-bb20-d11f7e1c241d\") " pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.581717 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-52p8f\" (UniqueName: \"kubernetes.io/projected/61857eb7-32cc-4317-bb20-d11f7e1c241d-kube-api-access-52p8f\") pod \"redhat-marketplace-r4pxr\" (UID: \"61857eb7-32cc-4317-bb20-d11f7e1c241d\") " pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.630305 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6wc9"] Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.661811 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:24 crc kubenswrapper[5116]: E1209 14:16:24.662123 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:25.16211095 +0000 UTC m=+123.683855748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.726827 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.762507 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:24 crc kubenswrapper[5116]: E1209 14:16:24.762718 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-09 14:16:25.262684926 +0000 UTC m=+123.784429724 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.762913 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:24 crc kubenswrapper[5116]: E1209 14:16:24.763551 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:25.263538179 +0000 UTC m=+123.785282977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.866578 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:24 crc kubenswrapper[5116]: E1209 14:16:24.866722 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:25.366693224 +0000 UTC m=+123.888438022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.867028 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:24 crc kubenswrapper[5116]: E1209 14:16:24.867342 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:25.367328611 +0000 UTC m=+123.889073409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.945518 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4pxr"] Dec 09 14:16:24 crc kubenswrapper[5116]: I1209 14:16:24.968905 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:24 crc kubenswrapper[5116]: E1209 14:16:24.970150 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:25.470120176 +0000 UTC m=+123.991864974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:24 crc kubenswrapper[5116]: W1209 14:16:24.991763 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61857eb7_32cc_4317_bb20_d11f7e1c241d.slice/crio-1248b2a17b7da1e09f2006071254eea263d0dec0b7ef2c491c7eb690edc33435 WatchSource:0}: Error finding container 1248b2a17b7da1e09f2006071254eea263d0dec0b7ef2c491c7eb690edc33435: Status 404 returned error can't find the container with id 1248b2a17b7da1e09f2006071254eea263d0dec0b7ef2c491c7eb690edc33435 Dec 09 14:16:25 crc kubenswrapper[5116]: E1209 14:16:25.070822 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:25.570805015 +0000 UTC m=+124.092549813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.071114 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.123379 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7tzzc"] Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.130390 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.133745 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.137807 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7tzzc"] Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.172336 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:25 crc kubenswrapper[5116]: E1209 14:16:25.172515 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:25.67246791 +0000 UTC m=+124.194212868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.173205 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:25 crc kubenswrapper[5116]: E1209 14:16:25.173542 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:25.673524119 +0000 UTC m=+124.195268917 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.262574 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xd87d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:16:25 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 09 14:16:25 crc kubenswrapper[5116]: [+]process-running ok Dec 09 14:16:25 crc kubenswrapper[5116]: healthz check failed Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.263095 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" podUID="327bd87a-2375-4b04-b49f-173966bba4fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.275010 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:25 crc kubenswrapper[5116]: E1209 14:16:25.275185 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:25.775153393 +0000 UTC m=+124.296898181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.275624 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25eb2cca-64e7-416b-9247-5548bf7a0eb4-catalog-content\") pod \"redhat-operators-7tzzc\" (UID: \"25eb2cca-64e7-416b-9247-5548bf7a0eb4\") " pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.275749 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.275842 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25eb2cca-64e7-416b-9247-5548bf7a0eb4-utilities\") pod \"redhat-operators-7tzzc\" (UID: \"25eb2cca-64e7-416b-9247-5548bf7a0eb4\") " pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.275879 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wljbx\" (UniqueName: \"kubernetes.io/projected/25eb2cca-64e7-416b-9247-5548bf7a0eb4-kube-api-access-wljbx\") pod \"redhat-operators-7tzzc\" (UID: \"25eb2cca-64e7-416b-9247-5548bf7a0eb4\") " pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:16:25 crc kubenswrapper[5116]: E1209 14:16:25.276351 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:25.776332864 +0000 UTC m=+124.298077662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.372929 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"edb3a89b78340a5cae669b19f8f0046c60eb6ad4a5ddfd80076dcad99f0fbc05"} Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.378115 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.378349 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25eb2cca-64e7-416b-9247-5548bf7a0eb4-utilities\") pod \"redhat-operators-7tzzc\" (UID: \"25eb2cca-64e7-416b-9247-5548bf7a0eb4\") " pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.378391 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wljbx\" (UniqueName: \"kubernetes.io/projected/25eb2cca-64e7-416b-9247-5548bf7a0eb4-kube-api-access-wljbx\") pod \"redhat-operators-7tzzc\" (UID: \"25eb2cca-64e7-416b-9247-5548bf7a0eb4\") " pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:16:25 crc kubenswrapper[5116]: E1209 14:16:25.378460 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:25.878428051 +0000 UTC m=+124.400172959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.378639 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25eb2cca-64e7-416b-9247-5548bf7a0eb4-catalog-content\") pod \"redhat-operators-7tzzc\" (UID: \"25eb2cca-64e7-416b-9247-5548bf7a0eb4\") " pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.379327 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25eb2cca-64e7-416b-9247-5548bf7a0eb4-utilities\") pod \"redhat-operators-7tzzc\" (UID: \"25eb2cca-64e7-416b-9247-5548bf7a0eb4\") " pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.379983 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25eb2cca-64e7-416b-9247-5548bf7a0eb4-catalog-content\") pod \"redhat-operators-7tzzc\" (UID: \"25eb2cca-64e7-416b-9247-5548bf7a0eb4\") " pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.386121 5116 generic.go:358] "Generic (PLEG): container finished" podID="57ce2822-5420-457b-b3dc-1314fccf7d63" containerID="a04fbf82845d8ad2cf74366ce5ded8edaa1d1901dcbce5ff1f439a4fe5569651" exitCode=0 Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.386233 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6wc9" event={"ID":"57ce2822-5420-457b-b3dc-1314fccf7d63","Type":"ContainerDied","Data":"a04fbf82845d8ad2cf74366ce5ded8edaa1d1901dcbce5ff1f439a4fe5569651"} Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.386278 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6wc9" event={"ID":"57ce2822-5420-457b-b3dc-1314fccf7d63","Type":"ContainerStarted","Data":"f988029123d39fffce9bb13d7e23741621f4fc7c5ec3d93dee6f757f76e3c0ca"} Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.395238 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"007333c89e1d68d4cae5bb7a0726934ea2d993c8a1a87ab3159dfc6cca5dca1d"} Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.395296 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"8562917b866f45924e2c97640af58fe4e9607395ccf0a1172995d07858414b13"} Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.403793 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wljbx\" (UniqueName: \"kubernetes.io/projected/25eb2cca-64e7-416b-9247-5548bf7a0eb4-kube-api-access-wljbx\") pod \"redhat-operators-7tzzc\" (UID: \"25eb2cca-64e7-416b-9247-5548bf7a0eb4\") " 
pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.404822 5116 generic.go:358] "Generic (PLEG): container finished" podID="56d17dc8-fa77-4745-9aaa-9e7f995cbf50" containerID="99b9ccc1e8139ad31580576971e98178c15bc548955300e32ae818e595575c02" exitCode=0 Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.405004 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"56d17dc8-fa77-4745-9aaa-9e7f995cbf50","Type":"ContainerDied","Data":"99b9ccc1e8139ad31580576971e98178c15bc548955300e32ae818e595575c02"} Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.425884 5116 generic.go:358] "Generic (PLEG): container finished" podID="61857eb7-32cc-4317-bb20-d11f7e1c241d" containerID="55be913e63bfc60f7d39a6ddd80745d6c0dc670676b65293df568b3d6c995b84" exitCode=0 Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.426078 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4pxr" event={"ID":"61857eb7-32cc-4317-bb20-d11f7e1c241d","Type":"ContainerDied","Data":"55be913e63bfc60f7d39a6ddd80745d6c0dc670676b65293df568b3d6c995b84"} Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.426107 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4pxr" event={"ID":"61857eb7-32cc-4317-bb20-d11f7e1c241d","Type":"ContainerStarted","Data":"1248b2a17b7da1e09f2006071254eea263d0dec0b7ef2c491c7eb690edc33435"} Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.437282 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" event={"ID":"51843597-ba2b-4059-aa79-13887c6100f2","Type":"ContainerStarted","Data":"8da637649bb15c20e57a970ff74359d616f66b01d4bc5f2f6918e5a44d7d6aa8"} Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.437348 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" event={"ID":"51843597-ba2b-4059-aa79-13887c6100f2","Type":"ContainerStarted","Data":"9f3c55fce84b842250e75bfd0dffb9ffc5712cbd11898299fd1e58c07ed5019d"} Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.437364 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pmt9f" event={"ID":"51843597-ba2b-4059-aa79-13887c6100f2","Type":"ContainerStarted","Data":"3801ec03495980e3181001d9b24487cf06a6b84897efe7d43cd5980b82d7fbc8"} Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.452120 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.462532 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"d158718a2b69f3bca588754d6fc33da26003f33592e7a3664e7726a1030cd79a"} Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.462574 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"20d4acbfaa45e46365ef21820462e6d6ed4816175d9d5ff81aa78b24e8e6205f"} Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.463104 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.484836 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:25 crc kubenswrapper[5116]: E1209 14:16:25.485731 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:25.985703076 +0000 UTC m=+124.507447874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.531933 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-pmt9f" podStartSLOduration=103.531900635 podStartE2EDuration="1m43.531900635s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:25.498092995 +0000 UTC m=+124.019837793" watchObservedRunningTime="2025-12-09 14:16:25.531900635 +0000 UTC m=+124.053645443" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.541879 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bm9z9"] Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.566217 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.570209 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bm9z9"] Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.592531 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:25 crc kubenswrapper[5116]: E1209 14:16:25.593329 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:26.093313909 +0000 UTC m=+124.615058707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.694062 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.694484 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4999e45-13d9-4e82-ab56-f1022ef697a9-utilities\") pod \"redhat-operators-bm9z9\" (UID: \"a4999e45-13d9-4e82-ab56-f1022ef697a9\") " pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:16:25 crc kubenswrapper[5116]: E1209 14:16:25.694496 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:26.194476841 +0000 UTC m=+124.716221689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.694667 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4999e45-13d9-4e82-ab56-f1022ef697a9-catalog-content\") pod \"redhat-operators-bm9z9\" (UID: \"a4999e45-13d9-4e82-ab56-f1022ef697a9\") " pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.694699 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cjlc\" (UniqueName: \"kubernetes.io/projected/a4999e45-13d9-4e82-ab56-f1022ef697a9-kube-api-access-5cjlc\") pod \"redhat-operators-bm9z9\" (UID: \"a4999e45-13d9-4e82-ab56-f1022ef697a9\") " pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.804694 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.805123 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4999e45-13d9-4e82-ab56-f1022ef697a9-catalog-content\") pod \"redhat-operators-bm9z9\" (UID: \"a4999e45-13d9-4e82-ab56-f1022ef697a9\") " pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.805167 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5cjlc\" (UniqueName: \"kubernetes.io/projected/a4999e45-13d9-4e82-ab56-f1022ef697a9-kube-api-access-5cjlc\") pod \"redhat-operators-bm9z9\" (UID: \"a4999e45-13d9-4e82-ab56-f1022ef697a9\") " pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.805261 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4999e45-13d9-4e82-ab56-f1022ef697a9-utilities\") pod \"redhat-operators-bm9z9\" (UID: \"a4999e45-13d9-4e82-ab56-f1022ef697a9\") " pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:16:25 crc kubenswrapper[5116]: E1209 14:16:25.805676 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:26.305649779 +0000 UTC m=+124.827394577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.806057 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4999e45-13d9-4e82-ab56-f1022ef697a9-utilities\") pod \"redhat-operators-bm9z9\" (UID: \"a4999e45-13d9-4e82-ab56-f1022ef697a9\") " pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.806323 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4999e45-13d9-4e82-ab56-f1022ef697a9-catalog-content\") pod \"redhat-operators-bm9z9\" (UID: \"a4999e45-13d9-4e82-ab56-f1022ef697a9\") " pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.814715 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-tx992 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.814779 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-tx992" podUID="22f707dd-e3f1-40e4-bc80-72d0d0ccf8ad" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.881751 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.889343 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cjlc\" (UniqueName: \"kubernetes.io/projected/a4999e45-13d9-4e82-ab56-f1022ef697a9-kube-api-access-5cjlc\") pod \"redhat-operators-bm9z9\" (UID: \"a4999e45-13d9-4e82-ab56-f1022ef697a9\") " pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.907976 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.908829 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.908891 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.908911 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:25 crc kubenswrapper[5116]: E1209 14:16:25.909370 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:26.409352079 +0000 UTC m=+124.931096877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.911759 5116 patch_prober.go:28] interesting pod/console-64d44f6ddf-8sdgn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.911811 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-8sdgn" podUID="f3706088-2315-4b35-852b-1327e8a99d18" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.938627 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.938700 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.955159 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7tzzc"] Dec 09 14:16:25 crc kubenswrapper[5116]: I1209 14:16:25.963675 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.010397 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2crzr\" (UniqueName: \"kubernetes.io/projected/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-kube-api-access-2crzr\") pod \"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\" (UID: \"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\") " Dec 09 
14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.010736 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-secret-volume\") pod \"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\" (UID: \"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\") " Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.010842 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.010890 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-config-volume\") pod \"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\" (UID: \"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9\") " Dec 09 14:16:26 crc kubenswrapper[5116]: E1209 14:16:26.011250 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:26.511216519 +0000 UTC m=+125.032961317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.012269 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-config-volume" (OuterVolumeSpecName: "config-volume") pod "03bd8401-ab8b-4d8c-a1a8-d9341a7becf9" (UID: "03bd8401-ab8b-4d8c-a1a8-d9341a7becf9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.030233 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-kube-api-access-2crzr" (OuterVolumeSpecName: "kube-api-access-2crzr") pod "03bd8401-ab8b-4d8c-a1a8-d9341a7becf9" (UID: "03bd8401-ab8b-4d8c-a1a8-d9341a7becf9"). InnerVolumeSpecName "kube-api-access-2crzr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.035104 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "03bd8401-ab8b-4d8c-a1a8-d9341a7becf9" (UID: "03bd8401-ab8b-4d8c-a1a8-d9341a7becf9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.113300 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.134785 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2crzr\" (UniqueName: \"kubernetes.io/projected/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-kube-api-access-2crzr\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.134815 5116 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.134824 5116 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03bd8401-ab8b-4d8c-a1a8-d9341a7becf9-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:26 crc kubenswrapper[5116]: E1209 14:16:26.134860 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:26.634844499 +0000 UTC m=+125.156589297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.238473 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:26 crc kubenswrapper[5116]: E1209 14:16:26.238814 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:26.738797654 +0000 UTC m=+125.260542452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.259993 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xd87d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:16:26 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 09 14:16:26 crc kubenswrapper[5116]: [+]process-running ok Dec 09 14:16:26 crc kubenswrapper[5116]: healthz check failed Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.260071 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" podUID="327bd87a-2375-4b04-b49f-173966bba4fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.261039 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.286463 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.286510 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.340257 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:26 crc kubenswrapper[5116]: E1209 14:16:26.342539 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:26.842527254 +0000 UTC m=+125.364272052 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.344425 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.412917 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bm9z9"] Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.443690 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:26 crc kubenswrapper[5116]: E1209 14:16:26.444057 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:26.944038466 +0000 UTC m=+125.465783264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.449192 5116 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.486885 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.486910 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-49k9q" event={"ID":"03bd8401-ab8b-4d8c-a1a8-d9341a7becf9","Type":"ContainerDied","Data":"8bce96b288415ccdd8dc7f9685f5156c72e7b91cc844c18db212b78c1e309f16"} Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.486998 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bce96b288415ccdd8dc7f9685f5156c72e7b91cc844c18db212b78c1e309f16" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.500247 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2gk88" event={"ID":"313d594f-05f3-4875-8cd4-1bab1042ba29","Type":"ContainerStarted","Data":"912544f94bb23de40b810d701b44bf7edf18da6b1e31dbcf94892e68cbc6c833"} Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.504354 5116 generic.go:358] "Generic (PLEG): container finished" podID="25eb2cca-64e7-416b-9247-5548bf7a0eb4" containerID="b9bf7d82f7ccc5756d246420da2555dab858e16d3f09eb7473806e2979083b34" exitCode=0 Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.504474 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tzzc" event={"ID":"25eb2cca-64e7-416b-9247-5548bf7a0eb4","Type":"ContainerDied","Data":"b9bf7d82f7ccc5756d246420da2555dab858e16d3f09eb7473806e2979083b34"} Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.504505 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tzzc" event={"ID":"25eb2cca-64e7-416b-9247-5548bf7a0eb4","Type":"ContainerStarted","Data":"9ebad28baf20338d188dfa28bfcdd6c122ecfc3402acebb51ac8a41c162b6a47"} Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.510259 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bm9z9" event={"ID":"a4999e45-13d9-4e82-ab56-f1022ef697a9","Type":"ContainerStarted","Data":"be0408977a2e87cc3434a0747285071030f01d5c56272d0272124eb660ec9401"} Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.519288 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mz9vv" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.519728 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-5rkz7" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.546597 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:26 crc kubenswrapper[5116]: E1209 14:16:26.547780 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:27.047701844 +0000 UTC m=+125.569446632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.650372 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:26 crc kubenswrapper[5116]: E1209 14:16:26.650703 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:27.150687484 +0000 UTC m=+125.672432282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.751984 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:26 crc kubenswrapper[5116]: E1209 14:16:26.752479 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:27.252454832 +0000 UTC m=+125.774199630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.854714 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:26 crc kubenswrapper[5116]: E1209 14:16:26.855008 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:16:27.35494568 +0000 UTC m=+125.876690598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.855149 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:26 crc kubenswrapper[5116]: E1209 14:16:26.856122 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:16:27.356113171 +0000 UTC m=+125.877857969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-gsv2s" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.885260 5116 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-09T14:16:26.449252424Z","UUID":"dd0866db-d5bb-4073-a00a-f557be99a66e","Handler":null,"Name":"","Endpoint":""} Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.889680 5116 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.889781 5116 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.940384 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.957733 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.957869 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56d17dc8-fa77-4745-9aaa-9e7f995cbf50-kube-api-access\") pod \"56d17dc8-fa77-4745-9aaa-9e7f995cbf50\" (UID: \"56d17dc8-fa77-4745-9aaa-9e7f995cbf50\") " Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.957937 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56d17dc8-fa77-4745-9aaa-9e7f995cbf50-kubelet-dir\") pod \"56d17dc8-fa77-4745-9aaa-9e7f995cbf50\" (UID: \"56d17dc8-fa77-4745-9aaa-9e7f995cbf50\") " Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.958311 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d17dc8-fa77-4745-9aaa-9e7f995cbf50-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "56d17dc8-fa77-4745-9aaa-9e7f995cbf50" (UID: "56d17dc8-fa77-4745-9aaa-9e7f995cbf50"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.977983 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d17dc8-fa77-4745-9aaa-9e7f995cbf50-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "56d17dc8-fa77-4745-9aaa-9e7f995cbf50" (UID: "56d17dc8-fa77-4745-9aaa-9e7f995cbf50"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:16:26 crc kubenswrapper[5116]: I1209 14:16:26.982382 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.061062 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.061163 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56d17dc8-fa77-4745-9aaa-9e7f995cbf50-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.061174 5116 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56d17dc8-fa77-4745-9aaa-9e7f995cbf50-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.063931 5116 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.063976 5116 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.281005 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xd87d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:16:27 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 09 14:16:27 crc kubenswrapper[5116]: [+]process-running ok Dec 09 14:16:27 crc kubenswrapper[5116]: healthz check failed Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.281474 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" podUID="327bd87a-2375-4b04-b49f-173966bba4fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.287599 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-gsv2s\" (UID: 
\"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.406299 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.417162 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.530159 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"56d17dc8-fa77-4745-9aaa-9e7f995cbf50","Type":"ContainerDied","Data":"8aaaf623aee702c06729fce4a91a1917e3b857a53f8050677f1a2391bf540b84"} Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.530197 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8aaaf623aee702c06729fce4a91a1917e3b857a53f8050677f1a2391bf540b84" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.530308 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.558895 5116 generic.go:358] "Generic (PLEG): container finished" podID="a4999e45-13d9-4e82-ab56-f1022ef697a9" containerID="fd9464a74f23c476b72e25d719ff367ac7919eb3a1b961d1f6bfd7d98199d2f7" exitCode=0 Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.559257 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bm9z9" event={"ID":"a4999e45-13d9-4e82-ab56-f1022ef697a9","Type":"ContainerDied","Data":"fd9464a74f23c476b72e25d719ff367ac7919eb3a1b961d1f6bfd7d98199d2f7"} Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.568186 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2gk88" event={"ID":"313d594f-05f3-4875-8cd4-1bab1042ba29","Type":"ContainerStarted","Data":"99a7313a820e67432f5f04ba25cd0e391f82cb0827b94dc262a79a0f16dc8b91"} Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.758756 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.879110 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-gsv2s"] Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.963048 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.964038 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03bd8401-ab8b-4d8c-a1a8-d9341a7becf9" containerName="collect-profiles" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.964060 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="03bd8401-ab8b-4d8c-a1a8-d9341a7becf9" containerName="collect-profiles" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.964075 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="56d17dc8-fa77-4745-9aaa-9e7f995cbf50" containerName="pruner" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.964082 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d17dc8-fa77-4745-9aaa-9e7f995cbf50" containerName="pruner" Dec 
09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.964224 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="56d17dc8-fa77-4745-9aaa-9e7f995cbf50" containerName="pruner" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.964239 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="03bd8401-ab8b-4d8c-a1a8-d9341a7becf9" containerName="collect-profiles" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.970846 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.971067 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.980124 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 09 14:16:27 crc kubenswrapper[5116]: I1209 14:16:27.980885 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:16:28 crc kubenswrapper[5116]: I1209 14:16:28.087552 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/daecaa85-80e4-48f0-b423-85dd42d5801e-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"daecaa85-80e4-48f0-b423-85dd42d5801e\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:16:28 crc kubenswrapper[5116]: I1209 14:16:28.088523 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/daecaa85-80e4-48f0-b423-85dd42d5801e-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"daecaa85-80e4-48f0-b423-85dd42d5801e\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:16:28 crc kubenswrapper[5116]: I1209 14:16:28.189360 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/daecaa85-80e4-48f0-b423-85dd42d5801e-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"daecaa85-80e4-48f0-b423-85dd42d5801e\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:16:28 crc kubenswrapper[5116]: I1209 14:16:28.189443 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/daecaa85-80e4-48f0-b423-85dd42d5801e-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"daecaa85-80e4-48f0-b423-85dd42d5801e\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:16:28 crc kubenswrapper[5116]: I1209 14:16:28.189751 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/daecaa85-80e4-48f0-b423-85dd42d5801e-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"daecaa85-80e4-48f0-b423-85dd42d5801e\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:16:28 crc kubenswrapper[5116]: I1209 14:16:28.217970 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/daecaa85-80e4-48f0-b423-85dd42d5801e-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"daecaa85-80e4-48f0-b423-85dd42d5801e\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:16:28 crc 
kubenswrapper[5116]: I1209 14:16:28.259166 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xd87d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:16:28 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 09 14:16:28 crc kubenswrapper[5116]: [+]process-running ok Dec 09 14:16:28 crc kubenswrapper[5116]: healthz check failed Dec 09 14:16:28 crc kubenswrapper[5116]: I1209 14:16:28.259225 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" podUID="327bd87a-2375-4b04-b49f-173966bba4fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:16:28 crc kubenswrapper[5116]: I1209 14:16:28.296281 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:16:28 crc kubenswrapper[5116]: I1209 14:16:28.575450 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2gk88" event={"ID":"313d594f-05f3-4875-8cd4-1bab1042ba29","Type":"ContainerStarted","Data":"cb62cdedc78f89d4055973aeacd5cb11b97dc241cfa947e5568eee26d3e3f060"} Dec 09 14:16:28 crc kubenswrapper[5116]: I1209 14:16:28.580141 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" event={"ID":"21d797a0-87da-4b3e-8322-c8f27f9bb2d4","Type":"ContainerStarted","Data":"25c4a44165183ffd747cda1b6ad586d77399e9b686438b257d71b29caf4a054b"} Dec 09 14:16:28 crc kubenswrapper[5116]: I1209 14:16:28.580180 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" event={"ID":"21d797a0-87da-4b3e-8322-c8f27f9bb2d4","Type":"ContainerStarted","Data":"f3a9814b83eb60079abff009b32f25e526dec863a0b3a1d4c69af18835238559"} Dec 09 14:16:28 crc kubenswrapper[5116]: I1209 14:16:28.682284 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-2gk88" podStartSLOduration=15.682262974 podStartE2EDuration="15.682262974s" podCreationTimestamp="2025-12-09 14:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:28.600058496 +0000 UTC m=+127.121803304" watchObservedRunningTime="2025-12-09 14:16:28.682262974 +0000 UTC m=+127.204007772" Dec 09 14:16:28 crc kubenswrapper[5116]: I1209 14:16:28.682565 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 09 14:16:28 crc kubenswrapper[5116]: I1209 14:16:28.952468 5116 ???:1] "http: TLS handshake error from 192.168.126.11:50642: no serving certificate available for the kubelet" Dec 09 14:16:29 crc kubenswrapper[5116]: I1209 14:16:29.260533 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:29 crc kubenswrapper[5116]: I1209 14:16:29.285611 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-fbkc6" Dec 09 14:16:29 crc kubenswrapper[5116]: I1209 14:16:29.433876 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-xd87d" Dec 09 14:16:29 crc kubenswrapper[5116]: I1209 14:16:29.621223 5116 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"daecaa85-80e4-48f0-b423-85dd42d5801e","Type":"ContainerStarted","Data":"8adb35224f50168b19b3a51aecd6f53d18138300b2a8ffafa43220fbca23fc1b"} Dec 09 14:16:29 crc kubenswrapper[5116]: I1209 14:16:29.622068 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:29 crc kubenswrapper[5116]: I1209 14:16:29.694202 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" podStartSLOduration=107.694180841 podStartE2EDuration="1m47.694180841s" podCreationTimestamp="2025-12-09 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:29.686081525 +0000 UTC m=+128.207826343" watchObservedRunningTime="2025-12-09 14:16:29.694180841 +0000 UTC m=+128.215925639" Dec 09 14:16:30 crc kubenswrapper[5116]: E1209 14:16:30.483827 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f35a6f5fa0ba9a66296c557256cf641e84fe698b2ef2571509f355d4dbff6ed6" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:16:30 crc kubenswrapper[5116]: E1209 14:16:30.486930 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f35a6f5fa0ba9a66296c557256cf641e84fe698b2ef2571509f355d4dbff6ed6" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:16:30 crc kubenswrapper[5116]: E1209 14:16:30.488943 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f35a6f5fa0ba9a66296c557256cf641e84fe698b2ef2571509f355d4dbff6ed6" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:16:30 crc kubenswrapper[5116]: E1209 14:16:30.489019 5116 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" podUID="2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 09 14:16:30 crc kubenswrapper[5116]: I1209 14:16:30.629364 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"daecaa85-80e4-48f0-b423-85dd42d5801e","Type":"ContainerStarted","Data":"131fca46dc11fad8c509d52ed4255c05def55ac64c37b7ca5fd59fffeaed0d56"} Dec 09 14:16:30 crc kubenswrapper[5116]: I1209 14:16:30.647366 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=3.647350503 podStartE2EDuration="3.647350503s" podCreationTimestamp="2025-12-09 14:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:30.645402061 +0000 UTC m=+129.167146859" watchObservedRunningTime="2025-12-09 14:16:30.647350503 +0000 UTC m=+129.169095321" Dec 09 
14:16:31 crc kubenswrapper[5116]: I1209 14:16:31.120541 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-tx992" Dec 09 14:16:31 crc kubenswrapper[5116]: I1209 14:16:31.640572 5116 generic.go:358] "Generic (PLEG): container finished" podID="daecaa85-80e4-48f0-b423-85dd42d5801e" containerID="131fca46dc11fad8c509d52ed4255c05def55ac64c37b7ca5fd59fffeaed0d56" exitCode=0 Dec 09 14:16:31 crc kubenswrapper[5116]: I1209 14:16:31.640731 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"daecaa85-80e4-48f0-b423-85dd42d5801e","Type":"ContainerDied","Data":"131fca46dc11fad8c509d52ed4255c05def55ac64c37b7ca5fd59fffeaed0d56"} Dec 09 14:16:33 crc kubenswrapper[5116]: I1209 14:16:33.285155 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:16:33 crc kubenswrapper[5116]: I1209 14:16:33.331882 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:16:33 crc kubenswrapper[5116]: I1209 14:16:33.371383 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/daecaa85-80e4-48f0-b423-85dd42d5801e-kube-api-access\") pod \"daecaa85-80e4-48f0-b423-85dd42d5801e\" (UID: \"daecaa85-80e4-48f0-b423-85dd42d5801e\") " Dec 09 14:16:33 crc kubenswrapper[5116]: I1209 14:16:33.371476 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/daecaa85-80e4-48f0-b423-85dd42d5801e-kubelet-dir\") pod \"daecaa85-80e4-48f0-b423-85dd42d5801e\" (UID: \"daecaa85-80e4-48f0-b423-85dd42d5801e\") " Dec 09 14:16:33 crc kubenswrapper[5116]: I1209 14:16:33.371802 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/daecaa85-80e4-48f0-b423-85dd42d5801e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "daecaa85-80e4-48f0-b423-85dd42d5801e" (UID: "daecaa85-80e4-48f0-b423-85dd42d5801e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:16:33 crc kubenswrapper[5116]: I1209 14:16:33.406353 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daecaa85-80e4-48f0-b423-85dd42d5801e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "daecaa85-80e4-48f0-b423-85dd42d5801e" (UID: "daecaa85-80e4-48f0-b423-85dd42d5801e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:16:33 crc kubenswrapper[5116]: I1209 14:16:33.472657 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/daecaa85-80e4-48f0-b423-85dd42d5801e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:33 crc kubenswrapper[5116]: I1209 14:16:33.472701 5116 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/daecaa85-80e4-48f0-b423-85dd42d5801e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:33 crc kubenswrapper[5116]: I1209 14:16:33.651338 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"daecaa85-80e4-48f0-b423-85dd42d5801e","Type":"ContainerDied","Data":"8adb35224f50168b19b3a51aecd6f53d18138300b2a8ffafa43220fbca23fc1b"} Dec 09 14:16:33 crc kubenswrapper[5116]: I1209 14:16:33.651390 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8adb35224f50168b19b3a51aecd6f53d18138300b2a8ffafa43220fbca23fc1b" Dec 09 14:16:33 crc kubenswrapper[5116]: I1209 14:16:33.651404 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:16:34 crc kubenswrapper[5116]: I1209 14:16:34.290917 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:16:35 crc kubenswrapper[5116]: I1209 14:16:35.910065 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:35 crc kubenswrapper[5116]: I1209 14:16:35.915741 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-8sdgn" Dec 09 14:16:39 crc kubenswrapper[5116]: I1209 14:16:39.239898 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54608: no serving certificate available for the kubelet" Dec 09 14:16:40 crc kubenswrapper[5116]: E1209 14:16:40.483748 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f35a6f5fa0ba9a66296c557256cf641e84fe698b2ef2571509f355d4dbff6ed6" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:16:40 crc kubenswrapper[5116]: E1209 14:16:40.485846 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f35a6f5fa0ba9a66296c557256cf641e84fe698b2ef2571509f355d4dbff6ed6" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:16:40 crc kubenswrapper[5116]: E1209 14:16:40.487599 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f35a6f5fa0ba9a66296c557256cf641e84fe698b2ef2571509f355d4dbff6ed6" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:16:40 crc kubenswrapper[5116]: E1209 14:16:40.487656 5116 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" 
podUID="2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 09 14:16:45 crc kubenswrapper[5116]: I1209 14:16:45.733316 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnkj6" event={"ID":"a3101038-e5b7-44e7-9b29-61e6976d7da0","Type":"ContainerStarted","Data":"c226fa9114e29cd897a8b924273e32af5e01a0f717f26faeffcc0399572378ea"} Dec 09 14:16:45 crc kubenswrapper[5116]: I1209 14:16:45.735166 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4ppp5" event={"ID":"27215642-7324-4959-8b89-554060ecec24","Type":"ContainerStarted","Data":"2c44ffb6f54ee7d102327bf12475d142a4999298c180cecf6dbb14b924f18a47"} Dec 09 14:16:45 crc kubenswrapper[5116]: I1209 14:16:45.736435 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4s642" event={"ID":"1b43fdb9-c388-42c6-90d8-1fb5de88023a","Type":"ContainerStarted","Data":"2a89185b5ffa5373730d6077d30be3063f916a30e5444a4454be9b4d95c713af"} Dec 09 14:16:45 crc kubenswrapper[5116]: I1209 14:16:45.738527 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tzzc" event={"ID":"25eb2cca-64e7-416b-9247-5548bf7a0eb4","Type":"ContainerStarted","Data":"9956b854004cf034e0d20cafdc0cafbf43fcc0385f7d2a11cf4fa8e71371c3d8"} Dec 09 14:16:45 crc kubenswrapper[5116]: I1209 14:16:45.739961 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bm9z9" event={"ID":"a4999e45-13d9-4e82-ab56-f1022ef697a9","Type":"ContainerStarted","Data":"c0a7cf3f54063ad06cab78a3512bd53901754757613fa756d3316b0647a51412"} Dec 09 14:16:45 crc kubenswrapper[5116]: I1209 14:16:45.741685 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4btj" event={"ID":"82099c38-ac0a-4c91-ad38-43c94ec739c1","Type":"ContainerStarted","Data":"153bc1f7bf8d673a9f723dfd9e6ae78eb0c678a4f56f4945c46ce4feb5705673"} Dec 09 14:16:45 crc kubenswrapper[5116]: I1209 14:16:45.743149 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4pxr" event={"ID":"61857eb7-32cc-4317-bb20-d11f7e1c241d","Type":"ContainerStarted","Data":"cd3ae085551423b66d6551ebb262434c051bb5c95a1ea98e280786a4c27105f1"} Dec 09 14:16:45 crc kubenswrapper[5116]: I1209 14:16:45.745836 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6wc9" event={"ID":"57ce2822-5420-457b-b3dc-1314fccf7d63","Type":"ContainerStarted","Data":"31ced6558c2265f64244783a233c27ff50754758a3cb2f94c0954e0eef793eb9"} Dec 09 14:16:46 crc kubenswrapper[5116]: I1209 14:16:46.756760 5116 generic.go:358] "Generic (PLEG): container finished" podID="57ce2822-5420-457b-b3dc-1314fccf7d63" containerID="31ced6558c2265f64244783a233c27ff50754758a3cb2f94c0954e0eef793eb9" exitCode=0 Dec 09 14:16:46 crc kubenswrapper[5116]: I1209 14:16:46.756976 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6wc9" event={"ID":"57ce2822-5420-457b-b3dc-1314fccf7d63","Type":"ContainerDied","Data":"31ced6558c2265f64244783a233c27ff50754758a3cb2f94c0954e0eef793eb9"} Dec 09 14:16:46 crc kubenswrapper[5116]: I1209 14:16:46.764214 5116 generic.go:358] "Generic (PLEG): container finished" podID="a3101038-e5b7-44e7-9b29-61e6976d7da0" containerID="c226fa9114e29cd897a8b924273e32af5e01a0f717f26faeffcc0399572378ea" exitCode=0 Dec 09 14:16:46 crc 
kubenswrapper[5116]: I1209 14:16:46.764362 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnkj6" event={"ID":"a3101038-e5b7-44e7-9b29-61e6976d7da0","Type":"ContainerDied","Data":"c226fa9114e29cd897a8b924273e32af5e01a0f717f26faeffcc0399572378ea"} Dec 09 14:16:46 crc kubenswrapper[5116]: I1209 14:16:46.768944 5116 generic.go:358] "Generic (PLEG): container finished" podID="27215642-7324-4959-8b89-554060ecec24" containerID="2c44ffb6f54ee7d102327bf12475d142a4999298c180cecf6dbb14b924f18a47" exitCode=0 Dec 09 14:16:46 crc kubenswrapper[5116]: I1209 14:16:46.769115 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4ppp5" event={"ID":"27215642-7324-4959-8b89-554060ecec24","Type":"ContainerDied","Data":"2c44ffb6f54ee7d102327bf12475d142a4999298c180cecf6dbb14b924f18a47"} Dec 09 14:16:46 crc kubenswrapper[5116]: I1209 14:16:46.774792 5116 generic.go:358] "Generic (PLEG): container finished" podID="82099c38-ac0a-4c91-ad38-43c94ec739c1" containerID="153bc1f7bf8d673a9f723dfd9e6ae78eb0c678a4f56f4945c46ce4feb5705673" exitCode=0 Dec 09 14:16:46 crc kubenswrapper[5116]: I1209 14:16:46.775056 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4btj" event={"ID":"82099c38-ac0a-4c91-ad38-43c94ec739c1","Type":"ContainerDied","Data":"153bc1f7bf8d673a9f723dfd9e6ae78eb0c678a4f56f4945c46ce4feb5705673"} Dec 09 14:16:46 crc kubenswrapper[5116]: I1209 14:16:46.779683 5116 generic.go:358] "Generic (PLEG): container finished" podID="61857eb7-32cc-4317-bb20-d11f7e1c241d" containerID="cd3ae085551423b66d6551ebb262434c051bb5c95a1ea98e280786a4c27105f1" exitCode=0 Dec 09 14:16:46 crc kubenswrapper[5116]: I1209 14:16:46.781224 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4pxr" event={"ID":"61857eb7-32cc-4317-bb20-d11f7e1c241d","Type":"ContainerDied","Data":"cd3ae085551423b66d6551ebb262434c051bb5c95a1ea98e280786a4c27105f1"} Dec 09 14:16:47 crc kubenswrapper[5116]: I1209 14:16:47.814241 5116 generic.go:358] "Generic (PLEG): container finished" podID="1b43fdb9-c388-42c6-90d8-1fb5de88023a" containerID="2a89185b5ffa5373730d6077d30be3063f916a30e5444a4454be9b4d95c713af" exitCode=0 Dec 09 14:16:47 crc kubenswrapper[5116]: I1209 14:16:47.814303 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4s642" event={"ID":"1b43fdb9-c388-42c6-90d8-1fb5de88023a","Type":"ContainerDied","Data":"2a89185b5ffa5373730d6077d30be3063f916a30e5444a4454be9b4d95c713af"} Dec 09 14:16:48 crc kubenswrapper[5116]: I1209 14:16:48.823708 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4ppp5" event={"ID":"27215642-7324-4959-8b89-554060ecec24","Type":"ContainerStarted","Data":"af5b75e03b2a952db073b2c463e8db4e520442c192c46ca4345cca18d1256701"} Dec 09 14:16:48 crc kubenswrapper[5116]: I1209 14:16:48.826148 5116 generic.go:358] "Generic (PLEG): container finished" podID="25eb2cca-64e7-416b-9247-5548bf7a0eb4" containerID="9956b854004cf034e0d20cafdc0cafbf43fcc0385f7d2a11cf4fa8e71371c3d8" exitCode=0 Dec 09 14:16:48 crc kubenswrapper[5116]: I1209 14:16:48.826219 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tzzc" event={"ID":"25eb2cca-64e7-416b-9247-5548bf7a0eb4","Type":"ContainerDied","Data":"9956b854004cf034e0d20cafdc0cafbf43fcc0385f7d2a11cf4fa8e71371c3d8"} Dec 09 14:16:48 crc kubenswrapper[5116]: I1209 
14:16:48.828674 5116 generic.go:358] "Generic (PLEG): container finished" podID="a4999e45-13d9-4e82-ab56-f1022ef697a9" containerID="c0a7cf3f54063ad06cab78a3512bd53901754757613fa756d3316b0647a51412" exitCode=0 Dec 09 14:16:48 crc kubenswrapper[5116]: I1209 14:16:48.828779 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bm9z9" event={"ID":"a4999e45-13d9-4e82-ab56-f1022ef697a9","Type":"ContainerDied","Data":"c0a7cf3f54063ad06cab78a3512bd53901754757613fa756d3316b0647a51412"} Dec 09 14:16:48 crc kubenswrapper[5116]: I1209 14:16:48.832063 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4btj" event={"ID":"82099c38-ac0a-4c91-ad38-43c94ec739c1","Type":"ContainerStarted","Data":"de0ce26a520ddf8d12bbfc3a59e58f6e81f901bc46b2e9db85ff5eda339ed69a"} Dec 09 14:16:48 crc kubenswrapper[5116]: I1209 14:16:48.834960 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4pxr" event={"ID":"61857eb7-32cc-4317-bb20-d11f7e1c241d","Type":"ContainerStarted","Data":"8c03ce8b55ca4001c3aa944a4c262558c0aa8a775f86787353296af345b85ca4"} Dec 09 14:16:48 crc kubenswrapper[5116]: I1209 14:16:48.837538 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6wc9" event={"ID":"57ce2822-5420-457b-b3dc-1314fccf7d63","Type":"ContainerStarted","Data":"8e25dc8b8bdea841b2de09c095a18270b71011a7a8f78394408c22fbebc5b31e"} Dec 09 14:16:48 crc kubenswrapper[5116]: I1209 14:16:48.839440 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnkj6" event={"ID":"a3101038-e5b7-44e7-9b29-61e6976d7da0","Type":"ContainerStarted","Data":"23965169f8ecc80f1ceca3dc62e1817dfd5cb2659d406f80de3ac1e9c7423859"} Dec 09 14:16:48 crc kubenswrapper[5116]: I1209 14:16:48.847229 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4ppp5" podStartSLOduration=5.8677317890000005 podStartE2EDuration="26.84720531s" podCreationTimestamp="2025-12-09 14:16:22 +0000 UTC" firstStartedPulling="2025-12-09 14:16:24.302147272 +0000 UTC m=+122.823892070" lastFinishedPulling="2025-12-09 14:16:45.281620783 +0000 UTC m=+143.803365591" observedRunningTime="2025-12-09 14:16:48.842106212 +0000 UTC m=+147.363851030" watchObservedRunningTime="2025-12-09 14:16:48.84720531 +0000 UTC m=+147.368950108" Dec 09 14:16:48 crc kubenswrapper[5116]: I1209 14:16:48.862050 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tnkj6" podStartSLOduration=5.8955003900000005 podStartE2EDuration="26.86203046s" podCreationTimestamp="2025-12-09 14:16:22 +0000 UTC" firstStartedPulling="2025-12-09 14:16:24.298293989 +0000 UTC m=+122.820038787" lastFinishedPulling="2025-12-09 14:16:45.264824059 +0000 UTC m=+143.786568857" observedRunningTime="2025-12-09 14:16:48.857776985 +0000 UTC m=+147.379521793" watchObservedRunningTime="2025-12-09 14:16:48.86203046 +0000 UTC m=+147.383775258" Dec 09 14:16:48 crc kubenswrapper[5116]: I1209 14:16:48.890784 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n6wc9" podStartSLOduration=6.016923914 podStartE2EDuration="25.890707345s" podCreationTimestamp="2025-12-09 14:16:23 +0000 UTC" firstStartedPulling="2025-12-09 14:16:25.387367119 +0000 UTC m=+123.909111917" lastFinishedPulling="2025-12-09 14:16:45.26115055 +0000 UTC 
m=+143.782895348" observedRunningTime="2025-12-09 14:16:48.888331061 +0000 UTC m=+147.410075889" watchObservedRunningTime="2025-12-09 14:16:48.890707345 +0000 UTC m=+147.412452163" Dec 09 14:16:48 crc kubenswrapper[5116]: I1209 14:16:48.929854 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r4pxr" podStartSLOduration=5.091607696 podStartE2EDuration="24.929832432s" podCreationTimestamp="2025-12-09 14:16:24 +0000 UTC" firstStartedPulling="2025-12-09 14:16:25.426946122 +0000 UTC m=+123.948690920" lastFinishedPulling="2025-12-09 14:16:45.265170848 +0000 UTC m=+143.786915656" observedRunningTime="2025-12-09 14:16:48.927404326 +0000 UTC m=+147.449149124" watchObservedRunningTime="2025-12-09 14:16:48.929832432 +0000 UTC m=+147.451577230" Dec 09 14:16:48 crc kubenswrapper[5116]: I1209 14:16:48.954615 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l4btj" podStartSLOduration=5.976443994 podStartE2EDuration="26.954598491s" podCreationTimestamp="2025-12-09 14:16:22 +0000 UTC" firstStartedPulling="2025-12-09 14:16:24.30621421 +0000 UTC m=+122.827959008" lastFinishedPulling="2025-12-09 14:16:45.284368687 +0000 UTC m=+143.806113505" observedRunningTime="2025-12-09 14:16:48.951877677 +0000 UTC m=+147.473622495" watchObservedRunningTime="2025-12-09 14:16:48.954598491 +0000 UTC m=+147.476343289" Dec 09 14:16:49 crc kubenswrapper[5116]: I1209 14:16:49.845351 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4s642" event={"ID":"1b43fdb9-c388-42c6-90d8-1fb5de88023a","Type":"ContainerStarted","Data":"d9be77cb6c7b76fb77043c36fe092560241b2512a9cd608239b4bfd26724e006"} Dec 09 14:16:50 crc kubenswrapper[5116]: I1209 14:16:50.072041 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4s642" podStartSLOduration=8.08176144 podStartE2EDuration="29.072019343s" podCreationTimestamp="2025-12-09 14:16:21 +0000 UTC" firstStartedPulling="2025-12-09 14:16:24.304163085 +0000 UTC m=+122.825907883" lastFinishedPulling="2025-12-09 14:16:45.294420988 +0000 UTC m=+143.816165786" observedRunningTime="2025-12-09 14:16:50.068559759 +0000 UTC m=+148.590304557" watchObservedRunningTime="2025-12-09 14:16:50.072019343 +0000 UTC m=+148.593764151" Dec 09 14:16:50 crc kubenswrapper[5116]: E1209 14:16:50.486177 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f35a6f5fa0ba9a66296c557256cf641e84fe698b2ef2571509f355d4dbff6ed6" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:16:50 crc kubenswrapper[5116]: E1209 14:16:50.487935 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f35a6f5fa0ba9a66296c557256cf641e84fe698b2ef2571509f355d4dbff6ed6" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:16:50 crc kubenswrapper[5116]: E1209 14:16:50.489105 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f35a6f5fa0ba9a66296c557256cf641e84fe698b2ef2571509f355d4dbff6ed6" cmd=["/bin/bash","-c","test -f 
/ready/ready"] Dec 09 14:16:50 crc kubenswrapper[5116]: E1209 14:16:50.489143 5116 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" podUID="2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 09 14:16:50 crc kubenswrapper[5116]: I1209 14:16:50.635872 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:16:50 crc kubenswrapper[5116]: I1209 14:16:50.852070 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bm9z9" event={"ID":"a4999e45-13d9-4e82-ab56-f1022ef697a9","Type":"ContainerStarted","Data":"96577c449c7dba93227eed6ac91511c750295d3b1b72d97f3912ccb3d81305a4"} Dec 09 14:16:51 crc kubenswrapper[5116]: I1209 14:16:51.859732 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tzzc" event={"ID":"25eb2cca-64e7-416b-9247-5548bf7a0eb4","Type":"ContainerStarted","Data":"b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00"} Dec 09 14:16:51 crc kubenswrapper[5116]: I1209 14:16:51.883620 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bm9z9" podStartSLOduration=9.178937640000001 podStartE2EDuration="26.883599493s" podCreationTimestamp="2025-12-09 14:16:25 +0000 UTC" firstStartedPulling="2025-12-09 14:16:27.559844097 +0000 UTC m=+126.081588895" lastFinishedPulling="2025-12-09 14:16:45.26450595 +0000 UTC m=+143.786250748" observedRunningTime="2025-12-09 14:16:51.883002877 +0000 UTC m=+150.404747695" watchObservedRunningTime="2025-12-09 14:16:51.883599493 +0000 UTC m=+150.405344291" Dec 09 14:16:51 crc kubenswrapper[5116]: I1209 14:16:51.906393 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7tzzc" podStartSLOduration=8.090195835 podStartE2EDuration="26.906360718s" podCreationTimestamp="2025-12-09 14:16:25 +0000 UTC" firstStartedPulling="2025-12-09 14:16:26.505344777 +0000 UTC m=+125.027089575" lastFinishedPulling="2025-12-09 14:16:45.32150966 +0000 UTC m=+143.843254458" observedRunningTime="2025-12-09 14:16:51.902423212 +0000 UTC m=+150.424168010" watchObservedRunningTime="2025-12-09 14:16:51.906360718 +0000 UTC m=+150.428105526" Dec 09 14:16:52 crc kubenswrapper[5116]: I1209 14:16:52.314010 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4s642" Dec 09 14:16:52 crc kubenswrapper[5116]: I1209 14:16:52.314089 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-4s642" Dec 09 14:16:52 crc kubenswrapper[5116]: I1209 14:16:52.774520 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:16:52 crc kubenswrapper[5116]: I1209 14:16:52.774846 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:52 crc kubenswrapper[5116]: I1209 14:16:52.774947 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:52 crc 
kubenswrapper[5116]: I1209 14:16:52.775044 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:16:53 crc kubenswrapper[5116]: I1209 14:16:53.216491 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:53 crc kubenswrapper[5116]: I1209 14:16:53.216608 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:16:53 crc kubenswrapper[5116]: I1209 14:16:53.216988 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4s642" Dec 09 14:16:53 crc kubenswrapper[5116]: I1209 14:16:53.262238 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4s642" Dec 09 14:16:53 crc kubenswrapper[5116]: I1209 14:16:53.282267 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r96f" Dec 09 14:16:53 crc kubenswrapper[5116]: I1209 14:16:53.330307 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:16:53 crc kubenswrapper[5116]: I1209 14:16:53.330344 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:16:53 crc kubenswrapper[5116]: I1209 14:16:53.369723 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:16:53 crc kubenswrapper[5116]: I1209 14:16:53.920543 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:16:53 crc kubenswrapper[5116]: I1209 14:16:53.920622 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:53 crc kubenswrapper[5116]: I1209 14:16:53.931705 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:16:54 crc kubenswrapper[5116]: I1209 14:16:54.318490 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:16:54 crc kubenswrapper[5116]: I1209 14:16:54.318560 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:16:54 crc kubenswrapper[5116]: I1209 14:16:54.371913 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:16:54 crc kubenswrapper[5116]: I1209 14:16:54.727825 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:16:54 crc kubenswrapper[5116]: I1209 14:16:54.727898 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:16:54 crc kubenswrapper[5116]: I1209 14:16:54.770862 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:16:54 crc kubenswrapper[5116]: I1209 14:16:54.882051 5116 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9ctzk_2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7/kube-multus-additional-cni-plugins/0.log" Dec 09 14:16:54 crc kubenswrapper[5116]: I1209 14:16:54.882088 5116 generic.go:358] "Generic (PLEG): container finished" podID="2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7" containerID="f35a6f5fa0ba9a66296c557256cf641e84fe698b2ef2571509f355d4dbff6ed6" exitCode=137 Dec 09 14:16:54 crc kubenswrapper[5116]: I1209 14:16:54.882862 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" event={"ID":"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7","Type":"ContainerDied","Data":"f35a6f5fa0ba9a66296c557256cf641e84fe698b2ef2571509f355d4dbff6ed6"} Dec 09 14:16:54 crc kubenswrapper[5116]: I1209 14:16:54.919642 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:16:54 crc kubenswrapper[5116]: I1209 14:16:54.924662 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.453098 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.453140 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.891204 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9ctzk_2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7/kube-multus-additional-cni-plugins/0.log" Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.891382 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" event={"ID":"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7","Type":"ContainerDied","Data":"6ca369308b706e2468cf3bffa857dabcf2c7913ed81e541d5f05c684f44e8920"} Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.891425 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ca369308b706e2468cf3bffa857dabcf2c7913ed81e541d5f05c684f44e8920" Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.909678 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.909740 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.935625 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9ctzk_2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7/kube-multus-additional-cni-plugins/0.log" Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.935716 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.962248 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.967884 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l4btj"] Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.968278 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l4btj" podUID="82099c38-ac0a-4c91-ad38-43c94ec739c1" containerName="registry-server" containerID="cri-o://de0ce26a520ddf8d12bbfc3a59e58f6e81f901bc46b2e9db85ff5eda339ed69a" gracePeriod=2 Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.978313 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b7bn\" (UniqueName: \"kubernetes.io/projected/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-kube-api-access-2b7bn\") pod \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.978470 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-cni-sysctl-allowlist\") pod \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.978506 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-ready\") pod \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.978533 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-tuning-conf-dir\") pod \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\" (UID: \"2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7\") " Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.979248 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7" (UID: "2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.979561 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-ready" (OuterVolumeSpecName: "ready") pod "2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7" (UID: "2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.980395 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7" (UID: "2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:16:55 crc kubenswrapper[5116]: I1209 14:16:55.991137 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-kube-api-access-2b7bn" (OuterVolumeSpecName: "kube-api-access-2b7bn") pod "2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7" (UID: "2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7"). InnerVolumeSpecName "kube-api-access-2b7bn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:16:56 crc kubenswrapper[5116]: I1209 14:16:56.079823 5116 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-ready\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:56 crc kubenswrapper[5116]: I1209 14:16:56.079859 5116 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:56 crc kubenswrapper[5116]: I1209 14:16:56.080073 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2b7bn\" (UniqueName: \"kubernetes.io/projected/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-kube-api-access-2b7bn\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:56 crc kubenswrapper[5116]: I1209 14:16:56.080081 5116 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:56 crc kubenswrapper[5116]: I1209 14:16:56.489144 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7tzzc" podUID="25eb2cca-64e7-416b-9247-5548bf7a0eb4" containerName="registry-server" probeResult="failure" output=< Dec 09 14:16:56 crc kubenswrapper[5116]: timeout: failed to connect service ":50051" within 1s Dec 09 14:16:56 crc kubenswrapper[5116]: > Dec 09 14:16:56 crc kubenswrapper[5116]: I1209 14:16:56.897745 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9ctzk" Dec 09 14:16:56 crc kubenswrapper[5116]: I1209 14:16:56.930098 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9ctzk"] Dec 09 14:16:56 crc kubenswrapper[5116]: I1209 14:16:56.936414 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9ctzk"] Dec 09 14:16:56 crc kubenswrapper[5116]: I1209 14:16:56.958493 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:16:57 crc kubenswrapper[5116]: I1209 14:16:57.563674 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:16:57 crc kubenswrapper[5116]: I1209 14:16:57.756336 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7" path="/var/lib/kubelet/pods/2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7/volumes" Dec 09 14:16:57 crc kubenswrapper[5116]: I1209 14:16:57.906147 5116 generic.go:358] "Generic (PLEG): container finished" podID="82099c38-ac0a-4c91-ad38-43c94ec739c1" containerID="de0ce26a520ddf8d12bbfc3a59e58f6e81f901bc46b2e9db85ff5eda339ed69a" exitCode=0 Dec 09 14:16:57 crc kubenswrapper[5116]: I1209 14:16:57.906211 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4btj" event={"ID":"82099c38-ac0a-4c91-ad38-43c94ec739c1","Type":"ContainerDied","Data":"de0ce26a520ddf8d12bbfc3a59e58f6e81f901bc46b2e9db85ff5eda339ed69a"} Dec 09 14:16:58 crc kubenswrapper[5116]: I1209 14:16:58.166212 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnkj6"] Dec 09 14:16:58 crc kubenswrapper[5116]: I1209 14:16:58.167133 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tnkj6" podUID="a3101038-e5b7-44e7-9b29-61e6976d7da0" containerName="registry-server" containerID="cri-o://23965169f8ecc80f1ceca3dc62e1817dfd5cb2659d406f80de3ac1e9c7423859" gracePeriod=2 Dec 09 14:16:58 crc kubenswrapper[5116]: I1209 14:16:58.369136 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4pxr"] Dec 09 14:16:58 crc kubenswrapper[5116]: I1209 14:16:58.369737 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r4pxr" podUID="61857eb7-32cc-4317-bb20-d11f7e1c241d" containerName="registry-server" containerID="cri-o://8c03ce8b55ca4001c3aa944a4c262558c0aa8a775f86787353296af345b85ca4" gracePeriod=2 Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.290483 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.423365 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82099c38-ac0a-4c91-ad38-43c94ec739c1-catalog-content\") pod \"82099c38-ac0a-4c91-ad38-43c94ec739c1\" (UID: \"82099c38-ac0a-4c91-ad38-43c94ec739c1\") " Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.423467 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq4ql\" (UniqueName: \"kubernetes.io/projected/82099c38-ac0a-4c91-ad38-43c94ec739c1-kube-api-access-jq4ql\") pod \"82099c38-ac0a-4c91-ad38-43c94ec739c1\" (UID: \"82099c38-ac0a-4c91-ad38-43c94ec739c1\") " Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.423709 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82099c38-ac0a-4c91-ad38-43c94ec739c1-utilities\") pod \"82099c38-ac0a-4c91-ad38-43c94ec739c1\" (UID: \"82099c38-ac0a-4c91-ad38-43c94ec739c1\") " Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.425458 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82099c38-ac0a-4c91-ad38-43c94ec739c1-utilities" (OuterVolumeSpecName: "utilities") pod "82099c38-ac0a-4c91-ad38-43c94ec739c1" (UID: "82099c38-ac0a-4c91-ad38-43c94ec739c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.426040 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82099c38-ac0a-4c91-ad38-43c94ec739c1-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.436556 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82099c38-ac0a-4c91-ad38-43c94ec739c1-kube-api-access-jq4ql" (OuterVolumeSpecName: "kube-api-access-jq4ql") pod "82099c38-ac0a-4c91-ad38-43c94ec739c1" (UID: "82099c38-ac0a-4c91-ad38-43c94ec739c1"). InnerVolumeSpecName "kube-api-access-jq4ql". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.492740 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82099c38-ac0a-4c91-ad38-43c94ec739c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82099c38-ac0a-4c91-ad38-43c94ec739c1" (UID: "82099c38-ac0a-4c91-ad38-43c94ec739c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.527630 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82099c38-ac0a-4c91-ad38-43c94ec739c1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.527675 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jq4ql\" (UniqueName: \"kubernetes.io/projected/82099c38-ac0a-4c91-ad38-43c94ec739c1-kube-api-access-jq4ql\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.765740 5116 ???:1] "http: TLS handshake error from 192.168.126.11:38920: no serving certificate available for the kubelet" Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.921264 5116 generic.go:358] "Generic (PLEG): container finished" podID="a3101038-e5b7-44e7-9b29-61e6976d7da0" containerID="23965169f8ecc80f1ceca3dc62e1817dfd5cb2659d406f80de3ac1e9c7423859" exitCode=0 Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.921368 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnkj6" event={"ID":"a3101038-e5b7-44e7-9b29-61e6976d7da0","Type":"ContainerDied","Data":"23965169f8ecc80f1ceca3dc62e1817dfd5cb2659d406f80de3ac1e9c7423859"} Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.923925 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4btj" event={"ID":"82099c38-ac0a-4c91-ad38-43c94ec739c1","Type":"ContainerDied","Data":"c8bf8b84dc0d45b571c537123c1f9f7b630a19ce73d5ab4215d7511e2345b908"} Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.924033 5116 scope.go:117] "RemoveContainer" containerID="de0ce26a520ddf8d12bbfc3a59e58f6e81f901bc46b2e9db85ff5eda339ed69a" Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.924071 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l4btj" Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.926402 5116 generic.go:358] "Generic (PLEG): container finished" podID="61857eb7-32cc-4317-bb20-d11f7e1c241d" containerID="8c03ce8b55ca4001c3aa944a4c262558c0aa8a775f86787353296af345b85ca4" exitCode=0 Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.926428 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4pxr" event={"ID":"61857eb7-32cc-4317-bb20-d11f7e1c241d","Type":"ContainerDied","Data":"8c03ce8b55ca4001c3aa944a4c262558c0aa8a775f86787353296af345b85ca4"} Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.942500 5116 scope.go:117] "RemoveContainer" containerID="153bc1f7bf8d673a9f723dfd9e6ae78eb0c678a4f56f4945c46ce4feb5705673" Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.952494 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l4btj"] Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.956930 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l4btj"] Dec 09 14:16:59 crc kubenswrapper[5116]: I1209 14:16:59.961140 5116 scope.go:117] "RemoveContainer" containerID="7acd6b8afa874324bc268b1fcde5eb081528c9305426836cc9c6fdd3e90400fc" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.319068 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.323717 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.443934 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52p8f\" (UniqueName: \"kubernetes.io/projected/61857eb7-32cc-4317-bb20-d11f7e1c241d-kube-api-access-52p8f\") pod \"61857eb7-32cc-4317-bb20-d11f7e1c241d\" (UID: \"61857eb7-32cc-4317-bb20-d11f7e1c241d\") " Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.444028 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3101038-e5b7-44e7-9b29-61e6976d7da0-catalog-content\") pod \"a3101038-e5b7-44e7-9b29-61e6976d7da0\" (UID: \"a3101038-e5b7-44e7-9b29-61e6976d7da0\") " Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.444095 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61857eb7-32cc-4317-bb20-d11f7e1c241d-catalog-content\") pod \"61857eb7-32cc-4317-bb20-d11f7e1c241d\" (UID: \"61857eb7-32cc-4317-bb20-d11f7e1c241d\") " Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.444130 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3101038-e5b7-44e7-9b29-61e6976d7da0-utilities\") pod \"a3101038-e5b7-44e7-9b29-61e6976d7da0\" (UID: \"a3101038-e5b7-44e7-9b29-61e6976d7da0\") " Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.444219 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61857eb7-32cc-4317-bb20-d11f7e1c241d-utilities\") pod \"61857eb7-32cc-4317-bb20-d11f7e1c241d\" (UID: \"61857eb7-32cc-4317-bb20-d11f7e1c241d\") " Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.444256 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv626\" (UniqueName: \"kubernetes.io/projected/a3101038-e5b7-44e7-9b29-61e6976d7da0-kube-api-access-bv626\") pod \"a3101038-e5b7-44e7-9b29-61e6976d7da0\" (UID: \"a3101038-e5b7-44e7-9b29-61e6976d7da0\") " Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.445180 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61857eb7-32cc-4317-bb20-d11f7e1c241d-utilities" (OuterVolumeSpecName: "utilities") pod "61857eb7-32cc-4317-bb20-d11f7e1c241d" (UID: "61857eb7-32cc-4317-bb20-d11f7e1c241d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.445250 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3101038-e5b7-44e7-9b29-61e6976d7da0-utilities" (OuterVolumeSpecName: "utilities") pod "a3101038-e5b7-44e7-9b29-61e6976d7da0" (UID: "a3101038-e5b7-44e7-9b29-61e6976d7da0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.449083 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61857eb7-32cc-4317-bb20-d11f7e1c241d-kube-api-access-52p8f" (OuterVolumeSpecName: "kube-api-access-52p8f") pod "61857eb7-32cc-4317-bb20-d11f7e1c241d" (UID: "61857eb7-32cc-4317-bb20-d11f7e1c241d"). InnerVolumeSpecName "kube-api-access-52p8f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.449268 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3101038-e5b7-44e7-9b29-61e6976d7da0-kube-api-access-bv626" (OuterVolumeSpecName: "kube-api-access-bv626") pod "a3101038-e5b7-44e7-9b29-61e6976d7da0" (UID: "a3101038-e5b7-44e7-9b29-61e6976d7da0"). InnerVolumeSpecName "kube-api-access-bv626". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.455622 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61857eb7-32cc-4317-bb20-d11f7e1c241d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61857eb7-32cc-4317-bb20-d11f7e1c241d" (UID: "61857eb7-32cc-4317-bb20-d11f7e1c241d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.475858 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3101038-e5b7-44e7-9b29-61e6976d7da0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3101038-e5b7-44e7-9b29-61e6976d7da0" (UID: "a3101038-e5b7-44e7-9b29-61e6976d7da0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.546005 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-52p8f\" (UniqueName: \"kubernetes.io/projected/61857eb7-32cc-4317-bb20-d11f7e1c241d-kube-api-access-52p8f\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.546047 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3101038-e5b7-44e7-9b29-61e6976d7da0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.546060 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61857eb7-32cc-4317-bb20-d11f7e1c241d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.546072 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3101038-e5b7-44e7-9b29-61e6976d7da0-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.546083 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61857eb7-32cc-4317-bb20-d11f7e1c241d-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.546094 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bv626\" (UniqueName: \"kubernetes.io/projected/a3101038-e5b7-44e7-9b29-61e6976d7da0-kube-api-access-bv626\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.562702 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bm9z9"] Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.563155 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bm9z9" podUID="a4999e45-13d9-4e82-ab56-f1022ef697a9" containerName="registry-server" containerID="cri-o://96577c449c7dba93227eed6ac91511c750295d3b1b72d97f3912ccb3d81305a4" gracePeriod=2 Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.936478 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnkj6" event={"ID":"a3101038-e5b7-44e7-9b29-61e6976d7da0","Type":"ContainerDied","Data":"dd32427c1d0aa0a0c4f821415c87c66425ec690af440b7fc41ad9bfaab8ebcba"} Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.936594 5116 scope.go:117] "RemoveContainer" containerID="23965169f8ecc80f1ceca3dc62e1817dfd5cb2659d406f80de3ac1e9c7423859" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.937005 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnkj6" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.945173 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4pxr" event={"ID":"61857eb7-32cc-4317-bb20-d11f7e1c241d","Type":"ContainerDied","Data":"1248b2a17b7da1e09f2006071254eea263d0dec0b7ef2c491c7eb690edc33435"} Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.945297 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4pxr" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.975254 5116 scope.go:117] "RemoveContainer" containerID="c226fa9114e29cd897a8b924273e32af5e01a0f717f26faeffcc0399572378ea" Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.988119 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnkj6"] Dec 09 14:17:00 crc kubenswrapper[5116]: I1209 14:17:00.991505 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tnkj6"] Dec 09 14:17:01 crc kubenswrapper[5116]: I1209 14:17:01.004381 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4pxr"] Dec 09 14:17:01 crc kubenswrapper[5116]: I1209 14:17:01.007556 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4pxr"] Dec 09 14:17:01 crc kubenswrapper[5116]: I1209 14:17:01.292408 5116 scope.go:117] "RemoveContainer" containerID="bc933b675c58492e5c58bdfe436010522501e121ce05bb48000427ede8265540" Dec 09 14:17:01 crc kubenswrapper[5116]: I1209 14:17:01.332030 5116 scope.go:117] "RemoveContainer" containerID="8c03ce8b55ca4001c3aa944a4c262558c0aa8a775f86787353296af345b85ca4" Dec 09 14:17:01 crc kubenswrapper[5116]: I1209 14:17:01.351685 5116 scope.go:117] "RemoveContainer" containerID="cd3ae085551423b66d6551ebb262434c051bb5c95a1ea98e280786a4c27105f1" Dec 09 14:17:01 crc kubenswrapper[5116]: I1209 14:17:01.666786 5116 scope.go:117] "RemoveContainer" containerID="55be913e63bfc60f7d39a6ddd80745d6c0dc670676b65293df568b3d6c995b84" Dec 09 14:17:01 crc kubenswrapper[5116]: I1209 14:17:01.764579 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61857eb7-32cc-4317-bb20-d11f7e1c241d" path="/var/lib/kubelet/pods/61857eb7-32cc-4317-bb20-d11f7e1c241d/volumes" Dec 09 14:17:01 crc kubenswrapper[5116]: I1209 14:17:01.766298 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82099c38-ac0a-4c91-ad38-43c94ec739c1" path="/var/lib/kubelet/pods/82099c38-ac0a-4c91-ad38-43c94ec739c1/volumes" Dec 09 14:17:01 crc kubenswrapper[5116]: I1209 14:17:01.767740 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3101038-e5b7-44e7-9b29-61e6976d7da0" path="/var/lib/kubelet/pods/a3101038-e5b7-44e7-9b29-61e6976d7da0/volumes" Dec 09 14:17:01 crc kubenswrapper[5116]: I1209 14:17:01.961678 5116 generic.go:358] "Generic (PLEG): container finished" podID="a4999e45-13d9-4e82-ab56-f1022ef697a9" containerID="96577c449c7dba93227eed6ac91511c750295d3b1b72d97f3912ccb3d81305a4" exitCode=0 Dec 09 14:17:01 crc kubenswrapper[5116]: I1209 14:17:01.961757 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bm9z9" event={"ID":"a4999e45-13d9-4e82-ab56-f1022ef697a9","Type":"ContainerDied","Data":"96577c449c7dba93227eed6ac91511c750295d3b1b72d97f3912ccb3d81305a4"} Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.357645 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359507 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="61857eb7-32cc-4317-bb20-d11f7e1c241d" containerName="extract-content" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359541 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="61857eb7-32cc-4317-bb20-d11f7e1c241d" 
containerName="extract-content" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359561 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="61857eb7-32cc-4317-bb20-d11f7e1c241d" containerName="extract-utilities" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359571 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="61857eb7-32cc-4317-bb20-d11f7e1c241d" containerName="extract-utilities" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359582 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="daecaa85-80e4-48f0-b423-85dd42d5801e" containerName="pruner" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359590 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="daecaa85-80e4-48f0-b423-85dd42d5801e" containerName="pruner" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359605 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82099c38-ac0a-4c91-ad38-43c94ec739c1" containerName="extract-content" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359613 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="82099c38-ac0a-4c91-ad38-43c94ec739c1" containerName="extract-content" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359629 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a3101038-e5b7-44e7-9b29-61e6976d7da0" containerName="extract-utilities" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359637 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3101038-e5b7-44e7-9b29-61e6976d7da0" containerName="extract-utilities" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359652 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a3101038-e5b7-44e7-9b29-61e6976d7da0" containerName="registry-server" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359661 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3101038-e5b7-44e7-9b29-61e6976d7da0" containerName="registry-server" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359684 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a3101038-e5b7-44e7-9b29-61e6976d7da0" containerName="extract-content" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359692 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3101038-e5b7-44e7-9b29-61e6976d7da0" containerName="extract-content" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359711 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82099c38-ac0a-4c91-ad38-43c94ec739c1" containerName="registry-server" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359720 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="82099c38-ac0a-4c91-ad38-43c94ec739c1" containerName="registry-server" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359738 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82099c38-ac0a-4c91-ad38-43c94ec739c1" containerName="extract-utilities" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359745 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="82099c38-ac0a-4c91-ad38-43c94ec739c1" containerName="extract-utilities" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359758 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="61857eb7-32cc-4317-bb20-d11f7e1c241d" containerName="registry-server" Dec 09 14:17:03 crc 
kubenswrapper[5116]: I1209 14:17:03.359765 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="61857eb7-32cc-4317-bb20-d11f7e1c241d" containerName="registry-server" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359776 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7" containerName="kube-multus-additional-cni-plugins" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359788 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7" containerName="kube-multus-additional-cni-plugins" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.359990 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="a3101038-e5b7-44e7-9b29-61e6976d7da0" containerName="registry-server" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.360009 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="61857eb7-32cc-4317-bb20-d11f7e1c241d" containerName="registry-server" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.360024 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="daecaa85-80e4-48f0-b423-85dd42d5801e" containerName="pruner" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.360041 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="2958d2e1-a5c5-4c6d-a051-0b3bb631fcc7" containerName="kube-multus-additional-cni-plugins" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.360051 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="82099c38-ac0a-4c91-ad38-43c94ec739c1" containerName="registry-server" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.631991 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.795183 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cjlc\" (UniqueName: \"kubernetes.io/projected/a4999e45-13d9-4e82-ab56-f1022ef697a9-kube-api-access-5cjlc\") pod \"a4999e45-13d9-4e82-ab56-f1022ef697a9\" (UID: \"a4999e45-13d9-4e82-ab56-f1022ef697a9\") " Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.795239 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4999e45-13d9-4e82-ab56-f1022ef697a9-catalog-content\") pod \"a4999e45-13d9-4e82-ab56-f1022ef697a9\" (UID: \"a4999e45-13d9-4e82-ab56-f1022ef697a9\") " Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.795309 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4999e45-13d9-4e82-ab56-f1022ef697a9-utilities\") pod \"a4999e45-13d9-4e82-ab56-f1022ef697a9\" (UID: \"a4999e45-13d9-4e82-ab56-f1022ef697a9\") " Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.796617 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4999e45-13d9-4e82-ab56-f1022ef697a9-utilities" (OuterVolumeSpecName: "utilities") pod "a4999e45-13d9-4e82-ab56-f1022ef697a9" (UID: "a4999e45-13d9-4e82-ab56-f1022ef697a9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.817840 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4999e45-13d9-4e82-ab56-f1022ef697a9-kube-api-access-5cjlc" (OuterVolumeSpecName: "kube-api-access-5cjlc") pod "a4999e45-13d9-4e82-ab56-f1022ef697a9" (UID: "a4999e45-13d9-4e82-ab56-f1022ef697a9"). InnerVolumeSpecName "kube-api-access-5cjlc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.892564 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4999e45-13d9-4e82-ab56-f1022ef697a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4999e45-13d9-4e82-ab56-f1022ef697a9" (UID: "a4999e45-13d9-4e82-ab56-f1022ef697a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.896429 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5cjlc\" (UniqueName: \"kubernetes.io/projected/a4999e45-13d9-4e82-ab56-f1022ef697a9-kube-api-access-5cjlc\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.896492 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4999e45-13d9-4e82-ab56-f1022ef697a9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:03 crc kubenswrapper[5116]: I1209 14:17:03.896510 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4999e45-13d9-4e82-ab56-f1022ef697a9-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.196516 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.196550 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bm9z9" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.205335 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.206831 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.209554 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.209617 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.209645 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bm9z9" event={"ID":"a4999e45-13d9-4e82-ab56-f1022ef697a9","Type":"ContainerDied","Data":"be0408977a2e87cc3434a0747285071030f01d5c56272d0272124eb660ec9401"} Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.210026 5116 scope.go:117] "RemoveContainer" containerID="96577c449c7dba93227eed6ac91511c750295d3b1b72d97f3912ccb3d81305a4" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.253171 5116 scope.go:117] "RemoveContainer" containerID="c0a7cf3f54063ad06cab78a3512bd53901754757613fa756d3316b0647a51412" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.268137 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bm9z9"] Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.271580 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bm9z9"] Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.291468 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.328208 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad9a0ce2-8836-4549-befe-f3cb6957cdda-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"ad9a0ce2-8836-4549-befe-f3cb6957cdda\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.328882 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad9a0ce2-8836-4549-befe-f3cb6957cdda-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"ad9a0ce2-8836-4549-befe-f3cb6957cdda\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.430157 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad9a0ce2-8836-4549-befe-f3cb6957cdda-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"ad9a0ce2-8836-4549-befe-f3cb6957cdda\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.430319 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad9a0ce2-8836-4549-befe-f3cb6957cdda-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: 
\"ad9a0ce2-8836-4549-befe-f3cb6957cdda\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.430243 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad9a0ce2-8836-4549-befe-f3cb6957cdda-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"ad9a0ce2-8836-4549-befe-f3cb6957cdda\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.434711 5116 scope.go:117] "RemoveContainer" containerID="fd9464a74f23c476b72e25d719ff367ac7919eb3a1b961d1f6bfd7d98199d2f7" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.454448 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad9a0ce2-8836-4549-befe-f3cb6957cdda-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"ad9a0ce2-8836-4549-befe-f3cb6957cdda\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.522656 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.707118 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 09 14:17:06 crc kubenswrapper[5116]: I1209 14:17:06.995899 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"ad9a0ce2-8836-4549-befe-f3cb6957cdda","Type":"ContainerStarted","Data":"d096bd3aed3c9c670e5e5150ef673db24f8f9a28bba37bd3e11b46dba43aed03"} Dec 09 14:17:07 crc kubenswrapper[5116]: I1209 14:17:07.755126 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4999e45-13d9-4e82-ab56-f1022ef697a9" path="/var/lib/kubelet/pods/a4999e45-13d9-4e82-ab56-f1022ef697a9/volumes" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.004350 5116 generic.go:358] "Generic (PLEG): container finished" podID="ad9a0ce2-8836-4549-befe-f3cb6957cdda" containerID="38ce62b618aa300a3b7ed04a64f60636c6636a53b3824628d7db9e2315dc21c0" exitCode=0 Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.004594 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"ad9a0ce2-8836-4549-befe-f3cb6957cdda","Type":"ContainerDied","Data":"38ce62b618aa300a3b7ed04a64f60636c6636a53b3824628d7db9e2315dc21c0"} Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.152885 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.153625 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4999e45-13d9-4e82-ab56-f1022ef697a9" containerName="registry-server" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.153643 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4999e45-13d9-4e82-ab56-f1022ef697a9" containerName="registry-server" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.153666 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4999e45-13d9-4e82-ab56-f1022ef697a9" containerName="extract-content" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.153674 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4999e45-13d9-4e82-ab56-f1022ef697a9" containerName="extract-content" Dec 09 
14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.153691 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4999e45-13d9-4e82-ab56-f1022ef697a9" containerName="extract-utilities" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.153699 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4999e45-13d9-4e82-ab56-f1022ef697a9" containerName="extract-utilities" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.153807 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4999e45-13d9-4e82-ab56-f1022ef697a9" containerName="registry-server" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.168455 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.168599 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.177395 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aba74808-b196-40d8-b39f-c46f64b1ef0a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"aba74808-b196-40d8-b39f-c46f64b1ef0a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.177642 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aba74808-b196-40d8-b39f-c46f64b1ef0a-var-lock\") pod \"installer-12-crc\" (UID: \"aba74808-b196-40d8-b39f-c46f64b1ef0a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.177740 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aba74808-b196-40d8-b39f-c46f64b1ef0a-kube-api-access\") pod \"installer-12-crc\" (UID: \"aba74808-b196-40d8-b39f-c46f64b1ef0a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.278316 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aba74808-b196-40d8-b39f-c46f64b1ef0a-var-lock\") pod \"installer-12-crc\" (UID: \"aba74808-b196-40d8-b39f-c46f64b1ef0a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.278453 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aba74808-b196-40d8-b39f-c46f64b1ef0a-kube-api-access\") pod \"installer-12-crc\" (UID: \"aba74808-b196-40d8-b39f-c46f64b1ef0a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.278387 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aba74808-b196-40d8-b39f-c46f64b1ef0a-var-lock\") pod \"installer-12-crc\" (UID: \"aba74808-b196-40d8-b39f-c46f64b1ef0a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.278535 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aba74808-b196-40d8-b39f-c46f64b1ef0a-kubelet-dir\") pod \"installer-12-crc\" (UID: 
\"aba74808-b196-40d8-b39f-c46f64b1ef0a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.278711 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aba74808-b196-40d8-b39f-c46f64b1ef0a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"aba74808-b196-40d8-b39f-c46f64b1ef0a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.298152 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aba74808-b196-40d8-b39f-c46f64b1ef0a-kube-api-access\") pod \"installer-12-crc\" (UID: \"aba74808-b196-40d8-b39f-c46f64b1ef0a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.496810 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:17:08 crc kubenswrapper[5116]: I1209 14:17:08.679115 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 09 14:17:09 crc kubenswrapper[5116]: I1209 14:17:09.011403 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"aba74808-b196-40d8-b39f-c46f64b1ef0a","Type":"ContainerStarted","Data":"808de9ebcf6859e2171230bee104387edce580709a2e701b93540e2e7cce0a20"} Dec 09 14:17:09 crc kubenswrapper[5116]: I1209 14:17:09.209692 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:17:09 crc kubenswrapper[5116]: I1209 14:17:09.289209 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad9a0ce2-8836-4549-befe-f3cb6957cdda-kube-api-access\") pod \"ad9a0ce2-8836-4549-befe-f3cb6957cdda\" (UID: \"ad9a0ce2-8836-4549-befe-f3cb6957cdda\") " Dec 09 14:17:09 crc kubenswrapper[5116]: I1209 14:17:09.289296 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad9a0ce2-8836-4549-befe-f3cb6957cdda-kubelet-dir\") pod \"ad9a0ce2-8836-4549-befe-f3cb6957cdda\" (UID: \"ad9a0ce2-8836-4549-befe-f3cb6957cdda\") " Dec 09 14:17:09 crc kubenswrapper[5116]: I1209 14:17:09.289472 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad9a0ce2-8836-4549-befe-f3cb6957cdda-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ad9a0ce2-8836-4549-befe-f3cb6957cdda" (UID: "ad9a0ce2-8836-4549-befe-f3cb6957cdda"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:17:09 crc kubenswrapper[5116]: I1209 14:17:09.295810 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad9a0ce2-8836-4549-befe-f3cb6957cdda-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ad9a0ce2-8836-4549-befe-f3cb6957cdda" (UID: "ad9a0ce2-8836-4549-befe-f3cb6957cdda"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:17:09 crc kubenswrapper[5116]: I1209 14:17:09.390554 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad9a0ce2-8836-4549-befe-f3cb6957cdda-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:09 crc kubenswrapper[5116]: I1209 14:17:09.390591 5116 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad9a0ce2-8836-4549-befe-f3cb6957cdda-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:10 crc kubenswrapper[5116]: I1209 14:17:10.020905 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"ad9a0ce2-8836-4549-befe-f3cb6957cdda","Type":"ContainerDied","Data":"d096bd3aed3c9c670e5e5150ef673db24f8f9a28bba37bd3e11b46dba43aed03"} Dec 09 14:17:10 crc kubenswrapper[5116]: I1209 14:17:10.021284 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d096bd3aed3c9c670e5e5150ef673db24f8f9a28bba37bd3e11b46dba43aed03" Dec 09 14:17:10 crc kubenswrapper[5116]: I1209 14:17:10.021748 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:17:10 crc kubenswrapper[5116]: I1209 14:17:10.023128 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"aba74808-b196-40d8-b39f-c46f64b1ef0a","Type":"ContainerStarted","Data":"a18829c3bde8e43685617bcbdaf732eda6c595ab7811b54414f133165ed013ca"} Dec 09 14:17:10 crc kubenswrapper[5116]: I1209 14:17:10.045846 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=2.045815309 podStartE2EDuration="2.045815309s" podCreationTimestamp="2025-12-09 14:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:17:10.039883839 +0000 UTC m=+168.561628637" watchObservedRunningTime="2025-12-09 14:17:10.045815309 +0000 UTC m=+168.567560147" Dec 09 14:17:40 crc kubenswrapper[5116]: I1209 14:17:40.760185 5116 ???:1] "http: TLS handshake error from 192.168.126.11:50588: no serving certificate available for the kubelet" Dec 09 14:17:46 crc kubenswrapper[5116]: I1209 14:17:46.968845 5116 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 14:17:46 crc kubenswrapper[5116]: I1209 14:17:46.972244 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ad9a0ce2-8836-4549-befe-f3cb6957cdda" containerName="pruner" Dec 09 14:17:46 crc kubenswrapper[5116]: I1209 14:17:46.972420 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9a0ce2-8836-4549-befe-f3cb6957cdda" containerName="pruner" Dec 09 14:17:46 crc kubenswrapper[5116]: I1209 14:17:46.972755 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="ad9a0ce2-8836-4549-befe-f3cb6957cdda" containerName="pruner" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.044181 5116 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.044318 5116 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 
14:17:47.044550 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.044924 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846" gracePeriod=15 Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.044904 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596" gracePeriod=15 Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045035 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045329 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045355 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045056 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a829c549e9817f776a9fed59ad466d4f4fdfe302fb056679bff4bfe5c1e89c58" gracePeriod=15 Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045364 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045459 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045069 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14" gracePeriod=15 Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045472 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045656 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045673 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045699 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045706 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045728 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045737 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045768 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045775 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045788 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045796 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045815 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045823 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.045044 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a" gracePeriod=15 Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.046042 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.046055 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.046064 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.046072 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.046083 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.046094 5116 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.046103 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.046243 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.046252 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.046385 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.046418 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.050025 5116 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.086770 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.148502 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.148580 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.148642 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.148675 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.148771 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.148840 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.148875 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.148947 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.149036 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.149110 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.247097 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.248384 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.248914 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a829c549e9817f776a9fed59ad466d4f4fdfe302fb056679bff4bfe5c1e89c58" exitCode=0 Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.248934 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846" exitCode=0 Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.248940 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14" exitCode=0 Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.248946 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" 
containerID="ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a" exitCode=2 Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.249007 5116 scope.go:117] "RemoveContainer" containerID="97df2c7ccb6882647716f7501a4d1d2e789da44437c83f30b373008b2af5a176" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.249941 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250003 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250026 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250033 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250086 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250156 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250191 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250228 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250269 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") 
pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250287 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250325 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250355 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250345 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250405 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250424 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250466 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250473 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250503 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 
09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250518 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.250736 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.252065 5116 generic.go:358] "Generic (PLEG): container finished" podID="aba74808-b196-40d8-b39f-c46f64b1ef0a" containerID="a18829c3bde8e43685617bcbdaf732eda6c595ab7811b54414f133165ed013ca" exitCode=0 Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.252124 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"aba74808-b196-40d8-b39f-c46f64b1ef0a","Type":"ContainerDied","Data":"a18829c3bde8e43685617bcbdaf732eda6c595ab7811b54414f133165ed013ca"} Dec 09 14:17:47 crc kubenswrapper[5116]: I1209 14:17:47.254420 5116 status_manager.go:895] "Failed to get status for pod" podUID="aba74808-b196-40d8-b39f-c46f64b1ef0a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:48 crc kubenswrapper[5116]: I1209 14:17:48.262232 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 09 14:17:48 crc kubenswrapper[5116]: I1209 14:17:48.556826 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:17:48 crc kubenswrapper[5116]: I1209 14:17:48.557503 5116 status_manager.go:895] "Failed to get status for pod" podUID="aba74808-b196-40d8-b39f-c46f64b1ef0a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:48 crc kubenswrapper[5116]: I1209 14:17:48.669586 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aba74808-b196-40d8-b39f-c46f64b1ef0a-var-lock\") pod \"aba74808-b196-40d8-b39f-c46f64b1ef0a\" (UID: \"aba74808-b196-40d8-b39f-c46f64b1ef0a\") " Dec 09 14:17:48 crc kubenswrapper[5116]: I1209 14:17:48.669657 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aba74808-b196-40d8-b39f-c46f64b1ef0a-kubelet-dir\") pod \"aba74808-b196-40d8-b39f-c46f64b1ef0a\" (UID: \"aba74808-b196-40d8-b39f-c46f64b1ef0a\") " Dec 09 14:17:48 crc kubenswrapper[5116]: I1209 14:17:48.669704 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aba74808-b196-40d8-b39f-c46f64b1ef0a-var-lock" (OuterVolumeSpecName: "var-lock") pod "aba74808-b196-40d8-b39f-c46f64b1ef0a" (UID: "aba74808-b196-40d8-b39f-c46f64b1ef0a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:17:48 crc kubenswrapper[5116]: I1209 14:17:48.669769 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aba74808-b196-40d8-b39f-c46f64b1ef0a-kube-api-access\") pod \"aba74808-b196-40d8-b39f-c46f64b1ef0a\" (UID: \"aba74808-b196-40d8-b39f-c46f64b1ef0a\") " Dec 09 14:17:48 crc kubenswrapper[5116]: I1209 14:17:48.669857 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aba74808-b196-40d8-b39f-c46f64b1ef0a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aba74808-b196-40d8-b39f-c46f64b1ef0a" (UID: "aba74808-b196-40d8-b39f-c46f64b1ef0a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:17:48 crc kubenswrapper[5116]: I1209 14:17:48.670194 5116 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aba74808-b196-40d8-b39f-c46f64b1ef0a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:48 crc kubenswrapper[5116]: I1209 14:17:48.670216 5116 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aba74808-b196-40d8-b39f-c46f64b1ef0a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:48 crc kubenswrapper[5116]: I1209 14:17:48.679809 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aba74808-b196-40d8-b39f-c46f64b1ef0a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aba74808-b196-40d8-b39f-c46f64b1ef0a" (UID: "aba74808-b196-40d8-b39f-c46f64b1ef0a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:17:48 crc kubenswrapper[5116]: I1209 14:17:48.771814 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aba74808-b196-40d8-b39f-c46f64b1ef0a-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.271662 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"aba74808-b196-40d8-b39f-c46f64b1ef0a","Type":"ContainerDied","Data":"808de9ebcf6859e2171230bee104387edce580709a2e701b93540e2e7cce0a20"} Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.272039 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="808de9ebcf6859e2171230bee104387edce580709a2e701b93540e2e7cce0a20" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.271705 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.288844 5116 status_manager.go:895] "Failed to get status for pod" podUID="aba74808-b196-40d8-b39f-c46f64b1ef0a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.437073 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.442267 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.443027 5116 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.443604 5116 status_manager.go:895] "Failed to get status for pod" podUID="aba74808-b196-40d8-b39f-c46f64b1ef0a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.496056 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.496114 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.496134 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.496171 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.496189 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.496263 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.496389 5116 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.496418 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). 
InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.496439 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.497043 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.502263 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.597432 5116 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.597482 5116 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.597570 5116 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.597591 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:49 crc kubenswrapper[5116]: I1209 14:17:49.761185 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.284248 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.286508 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596" exitCode=0 Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.286747 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.286785 5116 scope.go:117] "RemoveContainer" containerID="a829c549e9817f776a9fed59ad466d4f4fdfe302fb056679bff4bfe5c1e89c58" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.287316 5116 status_manager.go:895] "Failed to get status for pod" podUID="aba74808-b196-40d8-b39f-c46f64b1ef0a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.287543 5116 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.291714 5116 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.291905 5116 status_manager.go:895] "Failed to get status for pod" podUID="aba74808-b196-40d8-b39f-c46f64b1ef0a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.310287 5116 scope.go:117] "RemoveContainer" containerID="733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.327663 5116 scope.go:117] "RemoveContainer" containerID="c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.343174 5116 scope.go:117] "RemoveContainer" containerID="ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.360417 5116 scope.go:117] "RemoveContainer" containerID="4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.377626 5116 scope.go:117] "RemoveContainer" containerID="93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.447741 5116 scope.go:117] "RemoveContainer" containerID="a829c549e9817f776a9fed59ad466d4f4fdfe302fb056679bff4bfe5c1e89c58" Dec 09 14:17:50 crc kubenswrapper[5116]: E1209 14:17:50.448224 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a829c549e9817f776a9fed59ad466d4f4fdfe302fb056679bff4bfe5c1e89c58\": container with ID starting with a829c549e9817f776a9fed59ad466d4f4fdfe302fb056679bff4bfe5c1e89c58 not found: ID does not exist" containerID="a829c549e9817f776a9fed59ad466d4f4fdfe302fb056679bff4bfe5c1e89c58" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.448254 5116 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a829c549e9817f776a9fed59ad466d4f4fdfe302fb056679bff4bfe5c1e89c58"} err="failed to get container status \"a829c549e9817f776a9fed59ad466d4f4fdfe302fb056679bff4bfe5c1e89c58\": rpc error: code = NotFound desc = could not find container \"a829c549e9817f776a9fed59ad466d4f4fdfe302fb056679bff4bfe5c1e89c58\": container with ID starting with a829c549e9817f776a9fed59ad466d4f4fdfe302fb056679bff4bfe5c1e89c58 not found: ID does not exist" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.448298 5116 scope.go:117] "RemoveContainer" containerID="733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846" Dec 09 14:17:50 crc kubenswrapper[5116]: E1209 14:17:50.449094 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\": container with ID starting with 733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846 not found: ID does not exist" containerID="733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.449241 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846"} err="failed to get container status \"733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\": rpc error: code = NotFound desc = could not find container \"733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846\": container with ID starting with 733c8d7fb7316d4fef526e963f79fe877876ae80873018aa000e80a93c638846 not found: ID does not exist" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.449314 5116 scope.go:117] "RemoveContainer" containerID="c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14" Dec 09 14:17:50 crc kubenswrapper[5116]: E1209 14:17:50.449691 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\": container with ID starting with c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14 not found: ID does not exist" containerID="c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.450385 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14"} err="failed to get container status \"c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\": rpc error: code = NotFound desc = could not find container \"c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14\": container with ID starting with c6c56e69077b2805f55f8375752bc64eb50a86c5cd90de8a4d3494e17ad40a14 not found: ID does not exist" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.450543 5116 scope.go:117] "RemoveContainer" containerID="ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a" Dec 09 14:17:50 crc kubenswrapper[5116]: E1209 14:17:50.450860 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\": container with ID starting with ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a not found: ID does not exist" 
containerID="ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.451005 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a"} err="failed to get container status \"ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\": rpc error: code = NotFound desc = could not find container \"ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a\": container with ID starting with ce7be63425d9c3b7334c61d411d0cdd8a8e41e4119edd20244eebd6e763f914a not found: ID does not exist" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.451146 5116 scope.go:117] "RemoveContainer" containerID="4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596" Dec 09 14:17:50 crc kubenswrapper[5116]: E1209 14:17:50.451754 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\": container with ID starting with 4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596 not found: ID does not exist" containerID="4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.451804 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596"} err="failed to get container status \"4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\": rpc error: code = NotFound desc = could not find container \"4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596\": container with ID starting with 4d509fe7bf595d4327e8d706fd6a220cf0156ec09488ebf301336c25e683c596 not found: ID does not exist" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.451836 5116 scope.go:117] "RemoveContainer" containerID="93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851" Dec 09 14:17:50 crc kubenswrapper[5116]: E1209 14:17:50.452478 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\": container with ID starting with 93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851 not found: ID does not exist" containerID="93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851" Dec 09 14:17:50 crc kubenswrapper[5116]: I1209 14:17:50.452512 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851"} err="failed to get container status \"93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\": rpc error: code = NotFound desc = could not find container \"93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851\": container with ID starting with 93cf983901b6cb39e8844b91ef72dc9c59695ea5268be15c08e385fcfc972851 not found: ID does not exist" Dec 09 14:17:51 crc kubenswrapper[5116]: I1209 14:17:51.755504 5116 status_manager.go:895] "Failed to get status for pod" podUID="aba74808-b196-40d8-b39f-c46f64b1ef0a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:51 crc 
kubenswrapper[5116]: I1209 14:17:51.756135 5116 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:52 crc kubenswrapper[5116]: E1209 14:17:52.088477 5116 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:52 crc kubenswrapper[5116]: I1209 14:17:52.090109 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:52 crc kubenswrapper[5116]: E1209 14:17:52.122737 5116 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f91c92967170c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:17:52.121599756 +0000 UTC m=+210.643344554,LastTimestamp:2025-12-09 14:17:52.121599756 +0000 UTC m=+210.643344554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:17:52 crc kubenswrapper[5116]: I1209 14:17:52.302206 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"7bf4d05484b1f9b10cc3bcedde52ebc1c8a6924eeccbf9523ada083c49066705"} Dec 09 14:17:53 crc kubenswrapper[5116]: I1209 14:17:53.308457 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"e1c5ec2e3d21197bb164739dfcd71263dd2311f3a823a0cfc87f0068dbb2f719"} Dec 09 14:17:53 crc kubenswrapper[5116]: I1209 14:17:53.308679 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:53 crc kubenswrapper[5116]: E1209 14:17:53.310727 5116 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:53 crc kubenswrapper[5116]: I1209 14:17:53.310743 5116 status_manager.go:895] "Failed to get status for pod" podUID="aba74808-b196-40d8-b39f-c46f64b1ef0a" pod="openshift-kube-apiserver/installer-12-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:54 crc kubenswrapper[5116]: I1209 14:17:54.315310 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:54 crc kubenswrapper[5116]: E1209 14:17:54.317043 5116 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:17:56 crc kubenswrapper[5116]: E1209 14:17:56.558491 5116 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:56 crc kubenswrapper[5116]: E1209 14:17:56.560259 5116 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:56 crc kubenswrapper[5116]: E1209 14:17:56.560620 5116 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:56 crc kubenswrapper[5116]: E1209 14:17:56.560942 5116 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:56 crc kubenswrapper[5116]: E1209 14:17:56.561286 5116 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:56 crc kubenswrapper[5116]: I1209 14:17:56.561318 5116 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 09 14:17:56 crc kubenswrapper[5116]: E1209 14:17:56.561728 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="200ms" Dec 09 14:17:56 crc kubenswrapper[5116]: E1209 14:17:56.762580 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="400ms" Dec 09 14:17:57 crc kubenswrapper[5116]: E1209 14:17:57.164186 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="800ms" Dec 09 14:17:57 crc kubenswrapper[5116]: E1209 14:17:57.965475 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="1.6s" Dec 09 14:17:58 crc kubenswrapper[5116]: I1209 14:17:58.748483 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:58 crc kubenswrapper[5116]: I1209 14:17:58.749865 5116 status_manager.go:895] "Failed to get status for pod" podUID="aba74808-b196-40d8-b39f-c46f64b1ef0a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:58 crc kubenswrapper[5116]: I1209 14:17:58.772987 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="015770e1-5eef-4f29-9f60-2798e4e1ed27" Dec 09 14:17:58 crc kubenswrapper[5116]: I1209 14:17:58.773038 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="015770e1-5eef-4f29-9f60-2798e4e1ed27" Dec 09 14:17:58 crc kubenswrapper[5116]: E1209 14:17:58.773702 5116 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:58 crc kubenswrapper[5116]: I1209 14:17:58.774078 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:59 crc kubenswrapper[5116]: I1209 14:17:59.347105 5116 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="b989990616fb46a8f0a8c233251503e344da06a00378d745b4e1cfeb447e1549" exitCode=0 Dec 09 14:17:59 crc kubenswrapper[5116]: I1209 14:17:59.347194 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"b989990616fb46a8f0a8c233251503e344da06a00378d745b4e1cfeb447e1549"} Dec 09 14:17:59 crc kubenswrapper[5116]: I1209 14:17:59.347230 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"a15de825e020afdb0f8fc0edff558331478c9ae4ebd070267dd50f730d40b285"} Dec 09 14:17:59 crc kubenswrapper[5116]: I1209 14:17:59.347518 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="015770e1-5eef-4f29-9f60-2798e4e1ed27" Dec 09 14:17:59 crc kubenswrapper[5116]: I1209 14:17:59.347532 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="015770e1-5eef-4f29-9f60-2798e4e1ed27" Dec 09 14:17:59 crc kubenswrapper[5116]: E1209 14:17:59.347942 5116 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:17:59 crc kubenswrapper[5116]: I1209 14:17:59.347972 5116 status_manager.go:895] "Failed to get status for pod" podUID="aba74808-b196-40d8-b39f-c46f64b1ef0a" 
pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Dec 09 14:17:59 crc kubenswrapper[5116]: E1209 14:17:59.567560 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="3.2s" Dec 09 14:18:00 crc kubenswrapper[5116]: I1209 14:18:00.361729 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"07a3d29ad9b7a38db7aca50dc6455f2c109ea526120c263c39d9142cfe73f3b4"} Dec 09 14:18:00 crc kubenswrapper[5116]: I1209 14:18:00.362062 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3645c75e25bcd5be1cb81eccdbdad9ed9f54d8c1beb30b46ac798b10888c2574"} Dec 09 14:18:00 crc kubenswrapper[5116]: I1209 14:18:00.362079 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"06e119684c291a7bddb5d216335704c6ace4be121fd7b9ff3dc50b5f3ffe278f"} Dec 09 14:18:00 crc kubenswrapper[5116]: I1209 14:18:00.362093 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"c3263910b8d28ed2b3c52e56aaddb41ed7bff12928c6f57eff71f806bfe3b708"} Dec 09 14:18:01 crc kubenswrapper[5116]: I1209 14:18:01.371399 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"87620400a8b905374f077c41d302ba758ae364d3a1a7e570a82506f0251fd650"} Dec 09 14:18:01 crc kubenswrapper[5116]: I1209 14:18:01.371542 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:18:01 crc kubenswrapper[5116]: I1209 14:18:01.371702 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="015770e1-5eef-4f29-9f60-2798e4e1ed27" Dec 09 14:18:01 crc kubenswrapper[5116]: I1209 14:18:01.371731 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="015770e1-5eef-4f29-9f60-2798e4e1ed27" Dec 09 14:18:02 crc kubenswrapper[5116]: I1209 14:18:02.127062 5116 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 09 14:18:02 crc kubenswrapper[5116]: I1209 14:18:02.127139 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 09 14:18:02 crc kubenswrapper[5116]: 
I1209 14:18:02.379391 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:18:02 crc kubenswrapper[5116]: I1209 14:18:02.379449 5116 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7" exitCode=1 Dec 09 14:18:02 crc kubenswrapper[5116]: I1209 14:18:02.379552 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7"} Dec 09 14:18:02 crc kubenswrapper[5116]: I1209 14:18:02.380212 5116 scope.go:117] "RemoveContainer" containerID="71f3e13f5f5251991ddf9d5eb8af8f46d4e7a64d79f3ca36ba7a0ffeca6254a7" Dec 09 14:18:03 crc kubenswrapper[5116]: I1209 14:18:03.389671 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:18:03 crc kubenswrapper[5116]: I1209 14:18:03.390150 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"1ea310e6a394bdb093efa8b602d12cce78139d9f5a05f04c1f90941615a13b41"} Dec 09 14:18:03 crc kubenswrapper[5116]: I1209 14:18:03.775438 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:18:03 crc kubenswrapper[5116]: I1209 14:18:03.775505 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:18:03 crc kubenswrapper[5116]: I1209 14:18:03.785848 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:18:06 crc kubenswrapper[5116]: I1209 14:18:06.382337 5116 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:18:06 crc kubenswrapper[5116]: I1209 14:18:06.382371 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:18:06 crc kubenswrapper[5116]: I1209 14:18:06.415707 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="015770e1-5eef-4f29-9f60-2798e4e1ed27" Dec 09 14:18:06 crc kubenswrapper[5116]: I1209 14:18:06.415740 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="015770e1-5eef-4f29-9f60-2798e4e1ed27" Dec 09 14:18:06 crc kubenswrapper[5116]: I1209 14:18:06.420721 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:18:06 crc kubenswrapper[5116]: I1209 14:18:06.425098 5116 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="bca7e15c-dcbe-4457-882f-516ccb70eb37" Dec 09 14:18:06 crc kubenswrapper[5116]: I1209 14:18:06.955108 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:18:06 crc kubenswrapper[5116]: I1209 14:18:06.963804 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:18:07 crc kubenswrapper[5116]: I1209 14:18:07.412783 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="015770e1-5eef-4f29-9f60-2798e4e1ed27" Dec 09 14:18:07 crc kubenswrapper[5116]: I1209 14:18:07.412828 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="015770e1-5eef-4f29-9f60-2798e4e1ed27" Dec 09 14:18:07 crc kubenswrapper[5116]: I1209 14:18:07.413266 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:18:07 crc kubenswrapper[5116]: I1209 14:18:07.415122 5116 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="bca7e15c-dcbe-4457-882f-516ccb70eb37" Dec 09 14:18:13 crc kubenswrapper[5116]: I1209 14:18:13.088491 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 09 14:18:14 crc kubenswrapper[5116]: I1209 14:18:14.002351 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 09 14:18:16 crc kubenswrapper[5116]: I1209 14:18:16.414617 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 09 14:18:16 crc kubenswrapper[5116]: I1209 14:18:16.607267 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 09 14:18:16 crc kubenswrapper[5116]: I1209 14:18:16.840667 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 09 14:18:17 crc kubenswrapper[5116]: I1209 14:18:17.029891 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 09 14:18:17 crc kubenswrapper[5116]: I1209 14:18:17.419557 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 09 14:18:17 crc kubenswrapper[5116]: I1209 14:18:17.562453 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 09 14:18:18 crc kubenswrapper[5116]: I1209 14:18:18.363264 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 09 14:18:18 crc kubenswrapper[5116]: I1209 14:18:18.390343 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 09 14:18:18 crc kubenswrapper[5116]: I1209 14:18:18.423778 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:18:18 crc kubenswrapper[5116]: I1209 14:18:18.509602 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 09 14:18:19 crc kubenswrapper[5116]: I1209 14:18:19.002747 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 09 14:18:19 crc kubenswrapper[5116]: I1209 14:18:19.402938 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 09 14:18:19 crc kubenswrapper[5116]: I1209 14:18:19.468681 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:19 crc kubenswrapper[5116]: I1209 14:18:19.810996 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 09 14:18:19 crc kubenswrapper[5116]: I1209 14:18:19.818425 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 09 14:18:19 crc kubenswrapper[5116]: I1209 14:18:19.951322 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 09 14:18:20 crc kubenswrapper[5116]: I1209 14:18:20.009684 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 09 14:18:20 crc kubenswrapper[5116]: I1209 14:18:20.064048 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 09 14:18:20 crc kubenswrapper[5116]: I1209 14:18:20.360540 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 09 14:18:20 crc kubenswrapper[5116]: I1209 14:18:20.382567 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 09 14:18:20 crc kubenswrapper[5116]: I1209 14:18:20.481382 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 09 14:18:20 crc kubenswrapper[5116]: I1209 14:18:20.594788 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 09 14:18:20 crc kubenswrapper[5116]: I1209 14:18:20.804351 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 09 14:18:20 crc kubenswrapper[5116]: I1209 14:18:20.808672 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 09 14:18:20 crc kubenswrapper[5116]: I1209 14:18:20.813282 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:18:20 crc kubenswrapper[5116]: I1209 14:18:20.825431 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:20 crc kubenswrapper[5116]: I1209 14:18:20.854464 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 09 14:18:20 crc kubenswrapper[5116]: I1209 14:18:20.862253 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 09 14:18:20 crc kubenswrapper[5116]: I1209 14:18:20.995512 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.034671 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.139658 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.147883 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.163107 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.242404 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.293412 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.429323 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.429352 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.463379 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.626384 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.659308 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.669807 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.826566 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.837514 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.851675 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 09 14:18:21 crc 
kubenswrapper[5116]: I1209 14:18:21.864380 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.872086 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 09 14:18:21 crc kubenswrapper[5116]: I1209 14:18:21.898071 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.035281 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.115397 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.168105 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.168171 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.342817 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.378705 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.463582 5116 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.466887 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.470063 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.470118 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.478265 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.478743 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.489043 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.495832 5116 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.495805819 podStartE2EDuration="16.495805819s" podCreationTimestamp="2025-12-09 14:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:18:22.493302261 +0000 UTC m=+241.015047089" watchObservedRunningTime="2025-12-09 14:18:22.495805819 +0000 UTC m=+241.017550667" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.507894 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.538335 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.591903 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.671431 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.756664 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.768505 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.905513 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.909455 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.943979 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 09 14:18:22 crc kubenswrapper[5116]: I1209 14:18:22.954881 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.044114 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.075764 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.100000 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.180114 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.192744 5116 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.211696 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.279323 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.287466 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.316623 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.401355 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.589518 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.658974 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.709141 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.968284 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.981162 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 09 14:18:23 crc kubenswrapper[5116]: I1209 14:18:23.987494 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.035735 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.037144 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.046618 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.094707 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.155575 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.163280 5116 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.220753 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.229836 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.237032 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.285447 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.309212 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.355459 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.416758 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.469655 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.610608 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.683847 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.706283 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.840728 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.881502 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.916449 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.918943 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.991123 5116 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:18:24 crc kubenswrapper[5116]: I1209 14:18:24.993203 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 
14:18:25.068729 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.168303 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.266812 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.429499 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.437574 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.464969 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.512517 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.562208 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.562225 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.590909 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.594324 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.648154 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.651904 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.669901 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.730301 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.762435 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:25 crc kubenswrapper[5116]: I1209 14:18:25.774801 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:26 crc kubenswrapper[5116]: I1209 14:18:26.005118 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:18:26 crc kubenswrapper[5116]: I1209 14:18:26.019387 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 09 14:18:26 crc kubenswrapper[5116]: I1209 14:18:26.055106 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 09 14:18:26 crc kubenswrapper[5116]: I1209 14:18:26.108848 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:18:26 crc kubenswrapper[5116]: I1209 14:18:26.483375 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 09 14:18:26 crc kubenswrapper[5116]: I1209 14:18:26.524194 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 09 14:18:26 crc kubenswrapper[5116]: I1209 14:18:26.543501 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:18:26 crc kubenswrapper[5116]: I1209 14:18:26.565630 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 09 14:18:26 crc kubenswrapper[5116]: I1209 14:18:26.605893 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 09 14:18:26 crc kubenswrapper[5116]: I1209 14:18:26.640381 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 09 14:18:26 crc kubenswrapper[5116]: I1209 14:18:26.693701 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 09 14:18:26 crc kubenswrapper[5116]: I1209 14:18:26.861352 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 09 14:18:26 crc kubenswrapper[5116]: I1209 14:18:26.921306 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.103762 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.123768 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.150696 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.246266 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.278811 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 09 14:18:27 crc 
kubenswrapper[5116]: I1209 14:18:27.279668 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.307219 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.325797 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.430612 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.469144 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.533015 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.706209 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.714794 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.818761 5116 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.819948 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.847611 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.853724 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.888632 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.901158 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.920343 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.957714 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 09 14:18:27 crc kubenswrapper[5116]: I1209 14:18:27.986260 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.023197 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.177359 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.179165 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.207758 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.305745 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.421714 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.434793 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.488227 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.495780 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.529862 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.572329 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.627712 5116 ???:1] "http: TLS handshake error from 192.168.126.11:38752: no serving certificate available for the kubelet" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.677765 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.681598 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.710617 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.719807 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.742521 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.793667 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 09 
14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.870094 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.872219 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 09 14:18:28 crc kubenswrapper[5116]: I1209 14:18:28.972163 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.011201 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.025367 5116 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.025635 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://e1c5ec2e3d21197bb164739dfcd71263dd2311f3a823a0cfc87f0068dbb2f719" gracePeriod=5 Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.048228 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.114487 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.117241 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.185501 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.197799 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.209293 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.248765 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.319728 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.420362 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.511622 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.536073 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.576918 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.643673 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.695456 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.723527 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.736134 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.790615 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.824245 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.850427 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.855715 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.858946 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.944083 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 09 14:18:29 crc kubenswrapper[5116]: I1209 14:18:29.973988 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.042034 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.081200 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.089327 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.097197 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.165851 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 
14:18:30.220762 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.234211 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.240874 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.315886 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.328014 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.341040 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.399756 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.424929 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.645687 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.772878 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.928827 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 09 14:18:30 crc kubenswrapper[5116]: I1209 14:18:30.947775 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 09 14:18:31 crc kubenswrapper[5116]: I1209 14:18:31.027641 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 09 14:18:31 crc kubenswrapper[5116]: I1209 14:18:31.122682 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 09 14:18:31 crc kubenswrapper[5116]: I1209 14:18:31.130612 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 09 14:18:31 crc kubenswrapper[5116]: I1209 14:18:31.161173 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 09 14:18:31 crc kubenswrapper[5116]: I1209 14:18:31.280220 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 09 14:18:31 crc kubenswrapper[5116]: I1209 14:18:31.316056 5116 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 09 14:18:31 crc kubenswrapper[5116]: I1209 14:18:31.489773 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 09 14:18:31 crc kubenswrapper[5116]: I1209 14:18:31.771477 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 09 14:18:31 crc kubenswrapper[5116]: I1209 14:18:31.788890 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 09 14:18:31 crc kubenswrapper[5116]: I1209 14:18:31.836237 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 09 14:18:31 crc kubenswrapper[5116]: I1209 14:18:31.964848 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 09 14:18:31 crc kubenswrapper[5116]: I1209 14:18:31.995636 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 09 14:18:32 crc kubenswrapper[5116]: I1209 14:18:32.023363 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 09 14:18:32 crc kubenswrapper[5116]: I1209 14:18:32.056901 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 09 14:18:32 crc kubenswrapper[5116]: I1209 14:18:32.075212 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 09 14:18:32 crc kubenswrapper[5116]: I1209 14:18:32.186550 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 09 14:18:32 crc kubenswrapper[5116]: I1209 14:18:32.260247 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 09 14:18:32 crc kubenswrapper[5116]: I1209 14:18:32.326254 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 09 14:18:32 crc kubenswrapper[5116]: I1209 14:18:32.364180 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 09 14:18:32 crc kubenswrapper[5116]: I1209 14:18:32.388780 5116 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:18:32 crc kubenswrapper[5116]: I1209 14:18:32.536124 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 09 14:18:32 crc kubenswrapper[5116]: I1209 14:18:32.668690 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 09 14:18:32 crc kubenswrapper[5116]: I1209 14:18:32.862551 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 09 
14:18:32 crc kubenswrapper[5116]: I1209 14:18:32.870844 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 09 14:18:32 crc kubenswrapper[5116]: I1209 14:18:32.884504 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:18:32 crc kubenswrapper[5116]: I1209 14:18:32.978907 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 09 14:18:33 crc kubenswrapper[5116]: I1209 14:18:33.079025 5116 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:18:33 crc kubenswrapper[5116]: I1209 14:18:33.326664 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 09 14:18:33 crc kubenswrapper[5116]: I1209 14:18:33.501934 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 09 14:18:33 crc kubenswrapper[5116]: I1209 14:18:33.737056 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 09 14:18:33 crc kubenswrapper[5116]: I1209 14:18:33.855359 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 09 14:18:33 crc kubenswrapper[5116]: I1209 14:18:33.887616 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.268727 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.548698 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.584276 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.584339 5116 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="e1c5ec2e3d21197bb164739dfcd71263dd2311f3a823a0cfc87f0068dbb2f719" exitCode=137 Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.584543 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bf4d05484b1f9b10cc3bcedde52ebc1c8a6924eeccbf9523ada083c49066705" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.643215 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.643292 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.644674 5116 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.820920 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.821554 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.821812 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.822096 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.822341 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.821081 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.821664 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.821861 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.822135 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.824135 5116 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.824427 5116 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.824471 5116 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.824495 5116 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.839593 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.925212 5116 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:34 crc kubenswrapper[5116]: I1209 14:18:34.986210 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 09 14:18:35 crc kubenswrapper[5116]: I1209 14:18:35.591461 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:18:35 crc kubenswrapper[5116]: I1209 14:18:35.609358 5116 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 09 14:18:35 crc kubenswrapper[5116]: I1209 14:18:35.737118 5116 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 09 14:18:35 crc kubenswrapper[5116]: I1209 14:18:35.760033 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Dec 09 14:18:51 crc kubenswrapper[5116]: I1209 14:18:51.692790 5116 generic.go:358] "Generic (PLEG): container finished" podID="42fe2705-f9ee-4e26-8e56-2730ba8f6196" containerID="5070b77cfb97d30c9e7ad08cd695357d30e39f054753020b5a144725d6608a53" exitCode=0 Dec 09 14:18:51 crc kubenswrapper[5116]: I1209 14:18:51.692863 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" event={"ID":"42fe2705-f9ee-4e26-8e56-2730ba8f6196","Type":"ContainerDied","Data":"5070b77cfb97d30c9e7ad08cd695357d30e39f054753020b5a144725d6608a53"} Dec 09 14:18:51 crc kubenswrapper[5116]: I1209 14:18:51.694114 5116 scope.go:117] "RemoveContainer" containerID="5070b77cfb97d30c9e7ad08cd695357d30e39f054753020b5a144725d6608a53" Dec 09 14:18:52 crc kubenswrapper[5116]: I1209 14:18:52.167673 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:18:52 crc kubenswrapper[5116]: I1209 14:18:52.168130 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:18:52 crc kubenswrapper[5116]: I1209 14:18:52.268326 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:18:52 crc kubenswrapper[5116]: I1209 14:18:52.703319 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" event={"ID":"42fe2705-f9ee-4e26-8e56-2730ba8f6196","Type":"ContainerStarted","Data":"3e4de743939b2a7e05ae56ebb593688fa542c53cbfecbb6dc0d476c2a4b1fb1a"} Dec 09 14:18:52 crc kubenswrapper[5116]: I1209 14:18:52.703603 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:18:52 crc kubenswrapper[5116]: I1209 14:18:52.705072 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:19:02 crc 
kubenswrapper[5116]: I1209 14:19:02.707163 5116 ???:1] "http: TLS handshake error from 192.168.126.11:50870: no serving certificate available for the kubelet" Dec 09 14:19:10 crc kubenswrapper[5116]: I1209 14:19:10.914534 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-g2vff"] Dec 09 14:19:10 crc kubenswrapper[5116]: I1209 14:19:10.915090 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" podUID="4a6214c5-1554-43a3-82d3-65532d7a79a4" containerName="controller-manager" containerID="cri-o://309129def5e3a28bc2f0c34173fbf5e6d30f1829ce763594e60dc7527651488b" gracePeriod=30 Dec 09 14:19:10 crc kubenswrapper[5116]: I1209 14:19:10.925796 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl"] Dec 09 14:19:10 crc kubenswrapper[5116]: I1209 14:19:10.926089 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" podUID="3cc267de-0ae3-4e2d-b4b1-c4268e662ca2" containerName="route-controller-manager" containerID="cri-o://5383d2819e812ea3bf8e733f949d000d14f334b84b7920e805d6f22b8ee73488" gracePeriod=30 Dec 09 14:19:11 crc kubenswrapper[5116]: I1209 14:19:11.801463 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-8k9f4"] Dec 09 14:19:11 crc kubenswrapper[5116]: I1209 14:19:11.819164 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" event={"ID":"4a6214c5-1554-43a3-82d3-65532d7a79a4","Type":"ContainerDied","Data":"309129def5e3a28bc2f0c34173fbf5e6d30f1829ce763594e60dc7527651488b"} Dec 09 14:19:11 crc kubenswrapper[5116]: I1209 14:19:11.819124 5116 generic.go:358] "Generic (PLEG): container finished" podID="4a6214c5-1554-43a3-82d3-65532d7a79a4" containerID="309129def5e3a28bc2f0c34173fbf5e6d30f1829ce763594e60dc7527651488b" exitCode=0 Dec 09 14:19:11 crc kubenswrapper[5116]: I1209 14:19:11.826844 5116 generic.go:358] "Generic (PLEG): container finished" podID="3cc267de-0ae3-4e2d-b4b1-c4268e662ca2" containerID="5383d2819e812ea3bf8e733f949d000d14f334b84b7920e805d6f22b8ee73488" exitCode=0 Dec 09 14:19:11 crc kubenswrapper[5116]: I1209 14:19:11.826936 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" event={"ID":"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2","Type":"ContainerDied","Data":"5383d2819e812ea3bf8e733f949d000d14f334b84b7920e805d6f22b8ee73488"} Dec 09 14:19:11 crc kubenswrapper[5116]: I1209 14:19:11.989295 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.018688 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf"] Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.019333 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.019358 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.019370 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aba74808-b196-40d8-b39f-c46f64b1ef0a" containerName="installer" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.019381 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="aba74808-b196-40d8-b39f-c46f64b1ef0a" containerName="installer" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.019413 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3cc267de-0ae3-4e2d-b4b1-c4268e662ca2" containerName="route-controller-manager" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.019421 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc267de-0ae3-4e2d-b4b1-c4268e662ca2" containerName="route-controller-manager" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.019536 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="aba74808-b196-40d8-b39f-c46f64b1ef0a" containerName="installer" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.019553 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.019561 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3cc267de-0ae3-4e2d-b4b1-c4268e662ca2" containerName="route-controller-manager" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.046016 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.132650 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4a6214c5-1554-43a3-82d3-65532d7a79a4-tmp\") pod \"4a6214c5-1554-43a3-82d3-65532d7a79a4\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.132726 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-config\") pod \"4a6214c5-1554-43a3-82d3-65532d7a79a4\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.132753 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-config\") pod \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.132807 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-serving-cert\") pod \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.132835 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-client-ca\") pod \"4a6214c5-1554-43a3-82d3-65532d7a79a4\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.132879 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-proxy-ca-bundles\") pod \"4a6214c5-1554-43a3-82d3-65532d7a79a4\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.132944 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a6214c5-1554-43a3-82d3-65532d7a79a4-serving-cert\") pod \"4a6214c5-1554-43a3-82d3-65532d7a79a4\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.133047 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6ptb\" (UniqueName: \"kubernetes.io/projected/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-kube-api-access-w6ptb\") pod \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.133146 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-tmp\") pod \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.133218 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-client-ca\") pod \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\" (UID: \"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2\") " Dec 09 
14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.133259 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p25d2\" (UniqueName: \"kubernetes.io/projected/4a6214c5-1554-43a3-82d3-65532d7a79a4-kube-api-access-p25d2\") pod \"4a6214c5-1554-43a3-82d3-65532d7a79a4\" (UID: \"4a6214c5-1554-43a3-82d3-65532d7a79a4\") " Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.133597 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-tmp" (OuterVolumeSpecName: "tmp") pod "3cc267de-0ae3-4e2d-b4b1-c4268e662ca2" (UID: "3cc267de-0ae3-4e2d-b4b1-c4268e662ca2"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.133716 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-config" (OuterVolumeSpecName: "config") pod "3cc267de-0ae3-4e2d-b4b1-c4268e662ca2" (UID: "3cc267de-0ae3-4e2d-b4b1-c4268e662ca2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.133728 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-client-ca" (OuterVolumeSpecName: "client-ca") pod "3cc267de-0ae3-4e2d-b4b1-c4268e662ca2" (UID: "3cc267de-0ae3-4e2d-b4b1-c4268e662ca2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.133882 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-client-ca" (OuterVolumeSpecName: "client-ca") pod "4a6214c5-1554-43a3-82d3-65532d7a79a4" (UID: "4a6214c5-1554-43a3-82d3-65532d7a79a4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.134090 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a6214c5-1554-43a3-82d3-65532d7a79a4-tmp" (OuterVolumeSpecName: "tmp") pod "4a6214c5-1554-43a3-82d3-65532d7a79a4" (UID: "4a6214c5-1554-43a3-82d3-65532d7a79a4"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.133996 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4a6214c5-1554-43a3-82d3-65532d7a79a4" (UID: "4a6214c5-1554-43a3-82d3-65532d7a79a4"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.134300 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.134322 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.134373 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4a6214c5-1554-43a3-82d3-65532d7a79a4-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.134384 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.134392 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.134400 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.134741 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-config" (OuterVolumeSpecName: "config") pod "4a6214c5-1554-43a3-82d3-65532d7a79a4" (UID: "4a6214c5-1554-43a3-82d3-65532d7a79a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.139327 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a6214c5-1554-43a3-82d3-65532d7a79a4-kube-api-access-p25d2" (OuterVolumeSpecName: "kube-api-access-p25d2") pod "4a6214c5-1554-43a3-82d3-65532d7a79a4" (UID: "4a6214c5-1554-43a3-82d3-65532d7a79a4"). InnerVolumeSpecName "kube-api-access-p25d2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.139422 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3cc267de-0ae3-4e2d-b4b1-c4268e662ca2" (UID: "3cc267de-0ae3-4e2d-b4b1-c4268e662ca2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.139505 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a6214c5-1554-43a3-82d3-65532d7a79a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4a6214c5-1554-43a3-82d3-65532d7a79a4" (UID: "4a6214c5-1554-43a3-82d3-65532d7a79a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.141520 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-kube-api-access-w6ptb" (OuterVolumeSpecName: "kube-api-access-w6ptb") pod "3cc267de-0ae3-4e2d-b4b1-c4268e662ca2" (UID: "3cc267de-0ae3-4e2d-b4b1-c4268e662ca2"). InnerVolumeSpecName "kube-api-access-w6ptb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.235699 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p25d2\" (UniqueName: \"kubernetes.io/projected/4a6214c5-1554-43a3-82d3-65532d7a79a4-kube-api-access-p25d2\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.235766 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a6214c5-1554-43a3-82d3-65532d7a79a4-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.235784 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.235800 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a6214c5-1554-43a3-82d3-65532d7a79a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:12 crc kubenswrapper[5116]: I1209 14:19:12.235816 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w6ptb\" (UniqueName: \"kubernetes.io/projected/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2-kube-api-access-w6ptb\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.169975 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" event={"ID":"3cc267de-0ae3-4e2d-b4b1-c4268e662ca2","Type":"ContainerDied","Data":"28e2abb7c3dba159b3c7c3d7ba62a2bbc58d67130c07adf80419b81e2ab4a696"} Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.170133 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.170394 5116 scope.go:117] "RemoveContainer" containerID="5383d2819e812ea3bf8e733f949d000d14f334b84b7920e805d6f22b8ee73488" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.170163 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.170166 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.170355 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf"] Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.171016 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-g2vff" event={"ID":"4a6214c5-1554-43a3-82d3-65532d7a79a4","Type":"ContainerDied","Data":"d454243546f4f1c8603d31a38c01490013f1b7a5d1fd57116bc78c63eae8ee20"} Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.171038 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b68bdb594-x56bq"] Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.171627 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4a6214c5-1554-43a3-82d3-65532d7a79a4" containerName="controller-manager" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.171825 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a6214c5-1554-43a3-82d3-65532d7a79a4" containerName="controller-manager" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.172241 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="4a6214c5-1554-43a3-82d3-65532d7a79a4" containerName="controller-manager" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.197449 5116 scope.go:117] "RemoveContainer" containerID="309129def5e3a28bc2f0c34173fbf5e6d30f1829ce763594e60dc7527651488b" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.247322 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/08d460a7-0b96-4150-94b7-aafc8e88ef29-tmp\") pod \"route-controller-manager-86c494f46d-hfjdf\" (UID: \"08d460a7-0b96-4150-94b7-aafc8e88ef29\") " pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.247414 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08d460a7-0b96-4150-94b7-aafc8e88ef29-config\") pod \"route-controller-manager-86c494f46d-hfjdf\" (UID: \"08d460a7-0b96-4150-94b7-aafc8e88ef29\") " pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.247442 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08d460a7-0b96-4150-94b7-aafc8e88ef29-client-ca\") pod \"route-controller-manager-86c494f46d-hfjdf\" (UID: \"08d460a7-0b96-4150-94b7-aafc8e88ef29\") " pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.247598 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwpbg\" (UniqueName: \"kubernetes.io/projected/08d460a7-0b96-4150-94b7-aafc8e88ef29-kube-api-access-gwpbg\") pod \"route-controller-manager-86c494f46d-hfjdf\" (UID: \"08d460a7-0b96-4150-94b7-aafc8e88ef29\") " pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.247711 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08d460a7-0b96-4150-94b7-aafc8e88ef29-serving-cert\") pod \"route-controller-manager-86c494f46d-hfjdf\" (UID: \"08d460a7-0b96-4150-94b7-aafc8e88ef29\") " pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.302729 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b68bdb594-x56bq"] Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.303141 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl"] Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.303254 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-r7lsl"] Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.303373 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-g2vff"] Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.302892 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.303459 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-g2vff"] Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.305550 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.305835 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.306107 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.306326 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.307623 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.307640 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.318484 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.349464 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwpbg\" (UniqueName: \"kubernetes.io/projected/08d460a7-0b96-4150-94b7-aafc8e88ef29-kube-api-access-gwpbg\") pod \"route-controller-manager-86c494f46d-hfjdf\" (UID: \"08d460a7-0b96-4150-94b7-aafc8e88ef29\") " pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.349543 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/08d460a7-0b96-4150-94b7-aafc8e88ef29-serving-cert\") pod \"route-controller-manager-86c494f46d-hfjdf\" (UID: \"08d460a7-0b96-4150-94b7-aafc8e88ef29\") " pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.349636 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/08d460a7-0b96-4150-94b7-aafc8e88ef29-tmp\") pod \"route-controller-manager-86c494f46d-hfjdf\" (UID: \"08d460a7-0b96-4150-94b7-aafc8e88ef29\") " pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.349717 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08d460a7-0b96-4150-94b7-aafc8e88ef29-config\") pod \"route-controller-manager-86c494f46d-hfjdf\" (UID: \"08d460a7-0b96-4150-94b7-aafc8e88ef29\") " pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.349777 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08d460a7-0b96-4150-94b7-aafc8e88ef29-client-ca\") pod \"route-controller-manager-86c494f46d-hfjdf\" (UID: \"08d460a7-0b96-4150-94b7-aafc8e88ef29\") " pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.350476 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/08d460a7-0b96-4150-94b7-aafc8e88ef29-tmp\") pod \"route-controller-manager-86c494f46d-hfjdf\" (UID: \"08d460a7-0b96-4150-94b7-aafc8e88ef29\") " pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.351260 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08d460a7-0b96-4150-94b7-aafc8e88ef29-client-ca\") pod \"route-controller-manager-86c494f46d-hfjdf\" (UID: \"08d460a7-0b96-4150-94b7-aafc8e88ef29\") " pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.351286 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08d460a7-0b96-4150-94b7-aafc8e88ef29-config\") pod \"route-controller-manager-86c494f46d-hfjdf\" (UID: \"08d460a7-0b96-4150-94b7-aafc8e88ef29\") " pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.360140 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08d460a7-0b96-4150-94b7-aafc8e88ef29-serving-cert\") pod \"route-controller-manager-86c494f46d-hfjdf\" (UID: \"08d460a7-0b96-4150-94b7-aafc8e88ef29\") " pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.372749 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwpbg\" (UniqueName: \"kubernetes.io/projected/08d460a7-0b96-4150-94b7-aafc8e88ef29-kube-api-access-gwpbg\") pod \"route-controller-manager-86c494f46d-hfjdf\" (UID: 
\"08d460a7-0b96-4150-94b7-aafc8e88ef29\") " pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.451005 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d3856c1-40ce-422c-a0f7-7341ad75cfde-config\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.451056 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4d3856c1-40ce-422c-a0f7-7341ad75cfde-tmp\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.451177 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d3856c1-40ce-422c-a0f7-7341ad75cfde-client-ca\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.451237 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d3856c1-40ce-422c-a0f7-7341ad75cfde-proxy-ca-bundles\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.451391 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d3856c1-40ce-422c-a0f7-7341ad75cfde-serving-cert\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.451426 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87hg8\" (UniqueName: \"kubernetes.io/projected/4d3856c1-40ce-422c-a0f7-7341ad75cfde-kube-api-access-87hg8\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.497991 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.552533 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d3856c1-40ce-422c-a0f7-7341ad75cfde-config\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.552918 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4d3856c1-40ce-422c-a0f7-7341ad75cfde-tmp\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.553026 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d3856c1-40ce-422c-a0f7-7341ad75cfde-client-ca\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.553068 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d3856c1-40ce-422c-a0f7-7341ad75cfde-proxy-ca-bundles\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.553201 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d3856c1-40ce-422c-a0f7-7341ad75cfde-serving-cert\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.553393 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-87hg8\" (UniqueName: \"kubernetes.io/projected/4d3856c1-40ce-422c-a0f7-7341ad75cfde-kube-api-access-87hg8\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.553540 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4d3856c1-40ce-422c-a0f7-7341ad75cfde-tmp\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.554536 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d3856c1-40ce-422c-a0f7-7341ad75cfde-client-ca\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.554650 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d3856c1-40ce-422c-a0f7-7341ad75cfde-proxy-ca-bundles\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.554735 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d3856c1-40ce-422c-a0f7-7341ad75cfde-config\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.557660 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d3856c1-40ce-422c-a0f7-7341ad75cfde-serving-cert\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.574442 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-87hg8\" (UniqueName: \"kubernetes.io/projected/4d3856c1-40ce-422c-a0f7-7341ad75cfde-kube-api-access-87hg8\") pod \"controller-manager-7b68bdb594-x56bq\" (UID: \"4d3856c1-40ce-422c-a0f7-7341ad75cfde\") " pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.620390 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.757977 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc267de-0ae3-4e2d-b4b1-c4268e662ca2" path="/var/lib/kubelet/pods/3cc267de-0ae3-4e2d-b4b1-c4268e662ca2/volumes" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.758807 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a6214c5-1554-43a3-82d3-65532d7a79a4" path="/var/lib/kubelet/pods/4a6214c5-1554-43a3-82d3-65532d7a79a4/volumes" Dec 09 14:19:13 crc kubenswrapper[5116]: I1209 14:19:13.882563 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf"] Dec 09 14:19:14 crc kubenswrapper[5116]: I1209 14:19:14.002027 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b68bdb594-x56bq"] Dec 09 14:19:14 crc kubenswrapper[5116]: W1209 14:19:14.006563 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d3856c1_40ce_422c_a0f7_7341ad75cfde.slice/crio-aaa3f158424092dc48a16e32fc42e2728f6cf7804de441f0f545213b98a34888 WatchSource:0}: Error finding container aaa3f158424092dc48a16e32fc42e2728f6cf7804de441f0f545213b98a34888: Status 404 returned error can't find the container with id aaa3f158424092dc48a16e32fc42e2728f6cf7804de441f0f545213b98a34888 Dec 09 14:19:14 crc kubenswrapper[5116]: I1209 14:19:14.848626 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" event={"ID":"4d3856c1-40ce-422c-a0f7-7341ad75cfde","Type":"ContainerStarted","Data":"85e948fc2a7b2f3c7bf81174ae6259a8cdab3a10c824eb2dd1120657df2853fe"} Dec 09 14:19:14 crc kubenswrapper[5116]: I1209 
14:19:14.850119 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:14 crc kubenswrapper[5116]: I1209 14:19:14.850264 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" event={"ID":"4d3856c1-40ce-422c-a0f7-7341ad75cfde","Type":"ContainerStarted","Data":"aaa3f158424092dc48a16e32fc42e2728f6cf7804de441f0f545213b98a34888"} Dec 09 14:19:14 crc kubenswrapper[5116]: I1209 14:19:14.851211 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" event={"ID":"08d460a7-0b96-4150-94b7-aafc8e88ef29","Type":"ContainerStarted","Data":"43285e155b2e9b0ec42facfac271f22876ec2ebecc7be4307f7064eba4525aa4"} Dec 09 14:19:14 crc kubenswrapper[5116]: I1209 14:19:14.851268 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" event={"ID":"08d460a7-0b96-4150-94b7-aafc8e88ef29","Type":"ContainerStarted","Data":"60b73b5a9a5c1021df4e0ba02a0b847b35f29e98f0c3624a6d5aec3b888d2f0c"} Dec 09 14:19:14 crc kubenswrapper[5116]: I1209 14:19:14.851383 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:14 crc kubenswrapper[5116]: I1209 14:19:14.858469 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" Dec 09 14:19:14 crc kubenswrapper[5116]: I1209 14:19:14.881129 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" podStartSLOduration=3.880946161 podStartE2EDuration="3.880946161s" podCreationTimestamp="2025-12-09 14:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:19:14.869496904 +0000 UTC m=+293.391241742" watchObservedRunningTime="2025-12-09 14:19:14.880946161 +0000 UTC m=+293.402690999" Dec 09 14:19:14 crc kubenswrapper[5116]: I1209 14:19:14.905535 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86c494f46d-hfjdf" podStartSLOduration=3.90551683 podStartE2EDuration="3.90551683s" podCreationTimestamp="2025-12-09 14:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:19:14.901929424 +0000 UTC m=+293.423674232" watchObservedRunningTime="2025-12-09 14:19:14.90551683 +0000 UTC m=+293.427261638" Dec 09 14:19:15 crc kubenswrapper[5116]: I1209 14:19:15.395863 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b68bdb594-x56bq" Dec 09 14:19:21 crc kubenswrapper[5116]: I1209 14:19:21.855600 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 09 14:19:21 crc kubenswrapper[5116]: I1209 14:19:21.861602 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 09 
14:19:21 crc kubenswrapper[5116]: I1209 14:19:21.922613 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:19:21 crc kubenswrapper[5116]: I1209 14:19:21.927890 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:19:22 crc kubenswrapper[5116]: I1209 14:19:22.168113 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:19:22 crc kubenswrapper[5116]: I1209 14:19:22.168207 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:19:22 crc kubenswrapper[5116]: I1209 14:19:22.168273 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:19:22 crc kubenswrapper[5116]: I1209 14:19:22.169077 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc"} pod="openshift-machine-config-operator/machine-config-daemon-phdhk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 14:19:22 crc kubenswrapper[5116]: I1209 14:19:22.169182 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" containerID="cri-o://5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc" gracePeriod=600 Dec 09 14:19:22 crc kubenswrapper[5116]: I1209 14:19:22.879082 5116 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 14:19:22 crc kubenswrapper[5116]: I1209 14:19:22.916264 5116 generic.go:358] "Generic (PLEG): container finished" podID="140ab739-f0e3-4429-8e23-03782755777d" containerID="5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc" exitCode=0 Dec 09 14:19:22 crc kubenswrapper[5116]: I1209 14:19:22.916454 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" event={"ID":"140ab739-f0e3-4429-8e23-03782755777d","Type":"ContainerDied","Data":"5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc"} Dec 09 14:19:23 crc kubenswrapper[5116]: I1209 14:19:23.925253 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" event={"ID":"140ab739-f0e3-4429-8e23-03782755777d","Type":"ContainerStarted","Data":"17f6fa581e6c8bab2a2df410743179402ba025d782243947d0021133fa9c7873"} Dec 09 14:19:36 crc kubenswrapper[5116]: I1209 14:19:36.869862 5116 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" podUID="908467fd-ce00-441d-a504-dce785c290f2" containerName="oauth-openshift" containerID="cri-o://9e0910e76765f7d21de8f4d19dfb289fdb61bfede2a49fe925a8df441cc2a2db" gracePeriod=15 Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.001607 5116 generic.go:358] "Generic (PLEG): container finished" podID="908467fd-ce00-441d-a504-dce785c290f2" containerID="9e0910e76765f7d21de8f4d19dfb289fdb61bfede2a49fe925a8df441cc2a2db" exitCode=0 Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.001863 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" event={"ID":"908467fd-ce00-441d-a504-dce785c290f2","Type":"ContainerDied","Data":"9e0910e76765f7d21de8f4d19dfb289fdb61bfede2a49fe925a8df441cc2a2db"} Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.380491 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.411582 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6766bbf5db-96s7f"] Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.412493 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="908467fd-ce00-441d-a504-dce785c290f2" containerName="oauth-openshift" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.412589 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="908467fd-ce00-441d-a504-dce785c290f2" containerName="oauth-openshift" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.412762 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="908467fd-ce00-441d-a504-dce785c290f2" containerName="oauth-openshift" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.422061 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.442676 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6766bbf5db-96s7f"] Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.463883 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-idp-0-file-data\") pod \"908467fd-ce00-441d-a504-dce785c290f2\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.464249 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/908467fd-ce00-441d-a504-dce785c290f2-audit-dir\") pod \"908467fd-ce00-441d-a504-dce785c290f2\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.464391 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-audit-policies\") pod \"908467fd-ce00-441d-a504-dce785c290f2\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.464527 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-service-ca\") pod \"908467fd-ce00-441d-a504-dce785c290f2\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.464703 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-cliconfig\") pod \"908467fd-ce00-441d-a504-dce785c290f2\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.465529 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-router-certs\") pod \"908467fd-ce00-441d-a504-dce785c290f2\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.466113 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-error\") pod \"908467fd-ce00-441d-a504-dce785c290f2\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.466632 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-trusted-ca-bundle\") pod \"908467fd-ce00-441d-a504-dce785c290f2\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.466762 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-provider-selection\") pod \"908467fd-ce00-441d-a504-dce785c290f2\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.464389 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/908467fd-ce00-441d-a504-dce785c290f2-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "908467fd-ce00-441d-a504-dce785c290f2" (UID: "908467fd-ce00-441d-a504-dce785c290f2"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.465258 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "908467fd-ce00-441d-a504-dce785c290f2" (UID: "908467fd-ce00-441d-a504-dce785c290f2"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.465344 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "908467fd-ce00-441d-a504-dce785c290f2" (UID: "908467fd-ce00-441d-a504-dce785c290f2"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.465929 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "908467fd-ce00-441d-a504-dce785c290f2" (UID: "908467fd-ce00-441d-a504-dce785c290f2"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.467141 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "908467fd-ce00-441d-a504-dce785c290f2" (UID: "908467fd-ce00-441d-a504-dce785c290f2"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.467469 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-ocp-branding-template\") pod \"908467fd-ce00-441d-a504-dce785c290f2\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.467598 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-login\") pod \"908467fd-ce00-441d-a504-dce785c290f2\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.467741 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-serving-cert\") pod \"908467fd-ce00-441d-a504-dce785c290f2\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.467860 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spfpb\" (UniqueName: \"kubernetes.io/projected/908467fd-ce00-441d-a504-dce785c290f2-kube-api-access-spfpb\") pod \"908467fd-ce00-441d-a504-dce785c290f2\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.467997 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-session\") pod \"908467fd-ce00-441d-a504-dce785c290f2\" (UID: \"908467fd-ce00-441d-a504-dce785c290f2\") " Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.468524 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.468648 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.468742 5116 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/908467fd-ce00-441d-a504-dce785c290f2-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.468827 5116 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.468902 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.471576 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "908467fd-ce00-441d-a504-dce785c290f2" (UID: "908467fd-ce00-441d-a504-dce785c290f2"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.471907 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "908467fd-ce00-441d-a504-dce785c290f2" (UID: "908467fd-ce00-441d-a504-dce785c290f2"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.472597 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "908467fd-ce00-441d-a504-dce785c290f2" (UID: "908467fd-ce00-441d-a504-dce785c290f2"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.472871 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "908467fd-ce00-441d-a504-dce785c290f2" (UID: "908467fd-ce00-441d-a504-dce785c290f2"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.473785 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "908467fd-ce00-441d-a504-dce785c290f2" (UID: "908467fd-ce00-441d-a504-dce785c290f2"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.474168 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "908467fd-ce00-441d-a504-dce785c290f2" (UID: "908467fd-ce00-441d-a504-dce785c290f2"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.476234 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "908467fd-ce00-441d-a504-dce785c290f2" (UID: "908467fd-ce00-441d-a504-dce785c290f2"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.476645 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "908467fd-ce00-441d-a504-dce785c290f2" (UID: "908467fd-ce00-441d-a504-dce785c290f2"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.476909 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908467fd-ce00-441d-a504-dce785c290f2-kube-api-access-spfpb" (OuterVolumeSpecName: "kube-api-access-spfpb") pod "908467fd-ce00-441d-a504-dce785c290f2" (UID: "908467fd-ce00-441d-a504-dce785c290f2"). InnerVolumeSpecName "kube-api-access-spfpb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.570422 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-user-template-login\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.570504 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.570546 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-session\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.570579 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.570725 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06b01de3-5778-4b3e-87fd-4073173f1fd3-audit-dir\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.570812 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-router-certs\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.570887 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-user-template-error\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571038 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-service-ca\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571092 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mzwt\" (UniqueName: \"kubernetes.io/projected/06b01de3-5778-4b3e-87fd-4073173f1fd3-kube-api-access-2mzwt\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571152 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571204 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571262 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571324 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06b01de3-5778-4b3e-87fd-4073173f1fd3-audit-policies\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571367 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571466 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571496 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571516 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571532 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571545 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571558 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-spfpb\" (UniqueName: \"kubernetes.io/projected/908467fd-ce00-441d-a504-dce785c290f2-kube-api-access-spfpb\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571569 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571582 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.571596 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/908467fd-ce00-441d-a504-dce785c290f2-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.673046 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc 
kubenswrapper[5116]: I1209 14:19:37.673099 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.673119 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.673145 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06b01de3-5778-4b3e-87fd-4073173f1fd3-audit-policies\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.673165 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.673188 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-user-template-login\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.673240 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.673264 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-session\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.673278 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 
14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.673327 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06b01de3-5778-4b3e-87fd-4073173f1fd3-audit-dir\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.673349 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-router-certs\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.673418 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-user-template-error\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.673542 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-service-ca\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.673564 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2mzwt\" (UniqueName: \"kubernetes.io/projected/06b01de3-5778-4b3e-87fd-4073173f1fd3-kube-api-access-2mzwt\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.674108 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.674199 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06b01de3-5778-4b3e-87fd-4073173f1fd3-audit-dir\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.675158 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06b01de3-5778-4b3e-87fd-4073173f1fd3-audit-policies\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.675774 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-service-ca\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.676862 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.679625 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-router-certs\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.681461 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-user-template-error\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.681696 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-session\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.682802 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-user-template-login\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.683067 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.683431 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.683831 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.685312 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/06b01de3-5778-4b3e-87fd-4073173f1fd3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.694503 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mzwt\" (UniqueName: \"kubernetes.io/projected/06b01de3-5778-4b3e-87fd-4073173f1fd3-kube-api-access-2mzwt\") pod \"oauth-openshift-6766bbf5db-96s7f\" (UID: \"06b01de3-5778-4b3e-87fd-4073173f1fd3\") " pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:37 crc kubenswrapper[5116]: I1209 14:19:37.738516 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:38 crc kubenswrapper[5116]: I1209 14:19:38.014745 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" event={"ID":"908467fd-ce00-441d-a504-dce785c290f2","Type":"ContainerDied","Data":"1ec72e7dee55654f81943618759c94598e6c986035cc0328e68ad7855677a409"} Dec 09 14:19:38 crc kubenswrapper[5116]: I1209 14:19:38.014815 5116 scope.go:117] "RemoveContainer" containerID="9e0910e76765f7d21de8f4d19dfb289fdb61bfede2a49fe925a8df441cc2a2db" Dec 09 14:19:38 crc kubenswrapper[5116]: I1209 14:19:38.014892 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-8k9f4" Dec 09 14:19:38 crc kubenswrapper[5116]: I1209 14:19:38.037616 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-8k9f4"] Dec 09 14:19:38 crc kubenswrapper[5116]: I1209 14:19:38.041945 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-8k9f4"] Dec 09 14:19:38 crc kubenswrapper[5116]: I1209 14:19:38.182927 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6766bbf5db-96s7f"] Dec 09 14:19:39 crc kubenswrapper[5116]: I1209 14:19:39.024402 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" event={"ID":"06b01de3-5778-4b3e-87fd-4073173f1fd3","Type":"ContainerStarted","Data":"6267d6172955a9a1772dabfd507a20bff70e9a917ac9e6c869e3c2af885e20f2"} Dec 09 14:19:39 crc kubenswrapper[5116]: I1209 14:19:39.024479 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" event={"ID":"06b01de3-5778-4b3e-87fd-4073173f1fd3","Type":"ContainerStarted","Data":"2da481e2e0c438994d0fa457f5739c1e8910a534e90a95bd4337c5f499d88ba7"} Dec 09 14:19:39 crc kubenswrapper[5116]: I1209 14:19:39.024510 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:39 crc kubenswrapper[5116]: I1209 14:19:39.035143 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" Dec 09 14:19:39 crc kubenswrapper[5116]: I1209 14:19:39.046456 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6766bbf5db-96s7f" podStartSLOduration=28.046433034 podStartE2EDuration="28.046433034s" podCreationTimestamp="2025-12-09 14:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:19:39.043324841 +0000 UTC m=+317.565069639" watchObservedRunningTime="2025-12-09 14:19:39.046433034 +0000 UTC m=+317.568177862" Dec 09 14:19:39 crc kubenswrapper[5116]: I1209 14:19:39.756164 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="908467fd-ce00-441d-a504-dce785c290f2" path="/var/lib/kubelet/pods/908467fd-ce00-441d-a504-dce785c290f2/volumes" Dec 09 14:19:55 crc kubenswrapper[5116]: I1209 14:19:55.267840 5116 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 14:20:05 crc kubenswrapper[5116]: I1209 14:20:05.314999 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4ppp5"] Dec 09 14:20:05 crc kubenswrapper[5116]: I1209 14:20:05.316554 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4ppp5" podUID="27215642-7324-4959-8b89-554060ecec24" containerName="registry-server" containerID="cri-o://af5b75e03b2a952db073b2c463e8db4e520442c192c46ca4345cca18d1256701" gracePeriod=30 Dec 09 14:20:05 crc kubenswrapper[5116]: I1209 14:20:05.326600 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4s642"] Dec 09 14:20:05 crc kubenswrapper[5116]: I1209 14:20:05.327009 5116 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4s642" podUID="1b43fdb9-c388-42c6-90d8-1fb5de88023a" containerName="registry-server" containerID="cri-o://d9be77cb6c7b76fb77043c36fe092560241b2512a9cd608239b4bfd26724e006" gracePeriod=30 Dec 09 14:20:05 crc kubenswrapper[5116]: I1209 14:20:05.338067 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-k9645"] Dec 09 14:20:05 crc kubenswrapper[5116]: I1209 14:20:05.338365 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" podUID="42fe2705-f9ee-4e26-8e56-2730ba8f6196" containerName="marketplace-operator" containerID="cri-o://3e4de743939b2a7e05ae56ebb593688fa542c53cbfecbb6dc0d476c2a4b1fb1a" gracePeriod=30 Dec 09 14:20:05 crc kubenswrapper[5116]: I1209 14:20:05.353087 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6wc9"] Dec 09 14:20:05 crc kubenswrapper[5116]: I1209 14:20:05.353428 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n6wc9" podUID="57ce2822-5420-457b-b3dc-1314fccf7d63" containerName="registry-server" containerID="cri-o://8e25dc8b8bdea841b2de09c095a18270b71011a7a8f78394408c22fbebc5b31e" gracePeriod=30 Dec 09 14:20:05 crc kubenswrapper[5116]: I1209 14:20:05.373060 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-pfc7k"] Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.128038 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.137230 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7tzzc"] Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.137269 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-pfc7k"] Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.138098 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7tzzc" podUID="25eb2cca-64e7-416b-9247-5548bf7a0eb4" containerName="registry-server" containerID="cri-o://b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00" gracePeriod=30 Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.170902 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5557d473-0c1f-4131-83f9-4b52552d22d4-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-pfc7k\" (UID: \"5557d473-0c1f-4131-83f9-4b52552d22d4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.171560 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mks6c\" (UniqueName: \"kubernetes.io/projected/5557d473-0c1f-4131-83f9-4b52552d22d4-kube-api-access-mks6c\") pod \"marketplace-operator-547dbd544d-pfc7k\" (UID: \"5557d473-0c1f-4131-83f9-4b52552d22d4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.171864 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5557d473-0c1f-4131-83f9-4b52552d22d4-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-pfc7k\" (UID: \"5557d473-0c1f-4131-83f9-4b52552d22d4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.171894 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5557d473-0c1f-4131-83f9-4b52552d22d4-tmp\") pod \"marketplace-operator-547dbd544d-pfc7k\" (UID: \"5557d473-0c1f-4131-83f9-4b52552d22d4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.202269 5116 generic.go:358] "Generic (PLEG): container finished" podID="57ce2822-5420-457b-b3dc-1314fccf7d63" containerID="8e25dc8b8bdea841b2de09c095a18270b71011a7a8f78394408c22fbebc5b31e" exitCode=0 Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.202406 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6wc9" event={"ID":"57ce2822-5420-457b-b3dc-1314fccf7d63","Type":"ContainerDied","Data":"8e25dc8b8bdea841b2de09c095a18270b71011a7a8f78394408c22fbebc5b31e"} Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.203845 5116 generic.go:358] "Generic (PLEG): container finished" podID="27215642-7324-4959-8b89-554060ecec24" containerID="af5b75e03b2a952db073b2c463e8db4e520442c192c46ca4345cca18d1256701" exitCode=0 Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.203890 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4ppp5" event={"ID":"27215642-7324-4959-8b89-554060ecec24","Type":"ContainerDied","Data":"af5b75e03b2a952db073b2c463e8db4e520442c192c46ca4345cca18d1256701"} Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.205210 5116 generic.go:358] "Generic (PLEG): container finished" podID="1b43fdb9-c388-42c6-90d8-1fb5de88023a" containerID="d9be77cb6c7b76fb77043c36fe092560241b2512a9cd608239b4bfd26724e006" exitCode=0 Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.205279 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4s642" event={"ID":"1b43fdb9-c388-42c6-90d8-1fb5de88023a","Type":"ContainerDied","Data":"d9be77cb6c7b76fb77043c36fe092560241b2512a9cd608239b4bfd26724e006"} Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.206375 5116 generic.go:358] "Generic (PLEG): container finished" podID="42fe2705-f9ee-4e26-8e56-2730ba8f6196" containerID="3e4de743939b2a7e05ae56ebb593688fa542c53cbfecbb6dc0d476c2a4b1fb1a" exitCode=0 Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.206402 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" event={"ID":"42fe2705-f9ee-4e26-8e56-2730ba8f6196","Type":"ContainerDied","Data":"3e4de743939b2a7e05ae56ebb593688fa542c53cbfecbb6dc0d476c2a4b1fb1a"} Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.206422 5116 scope.go:117] "RemoveContainer" containerID="5070b77cfb97d30c9e7ad08cd695357d30e39f054753020b5a144725d6608a53" Dec 09 14:20:06 crc kubenswrapper[5116]: E1209 14:20:06.210814 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00 is running failed: container process not found" containerID="b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:20:06 crc kubenswrapper[5116]: E1209 14:20:06.211334 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00 is running failed: container process not found" containerID="b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:20:06 crc kubenswrapper[5116]: E1209 14:20:06.211663 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00 is running failed: container process not found" containerID="b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:20:06 crc kubenswrapper[5116]: E1209 14:20:06.211703 5116 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-7tzzc" podUID="25eb2cca-64e7-416b-9247-5548bf7a0eb4" containerName="registry-server" probeResult="unknown" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.273406 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mks6c\" (UniqueName: \"kubernetes.io/projected/5557d473-0c1f-4131-83f9-4b52552d22d4-kube-api-access-mks6c\") pod \"marketplace-operator-547dbd544d-pfc7k\" (UID: \"5557d473-0c1f-4131-83f9-4b52552d22d4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.273461 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5557d473-0c1f-4131-83f9-4b52552d22d4-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-pfc7k\" (UID: \"5557d473-0c1f-4131-83f9-4b52552d22d4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.273495 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5557d473-0c1f-4131-83f9-4b52552d22d4-tmp\") pod \"marketplace-operator-547dbd544d-pfc7k\" (UID: \"5557d473-0c1f-4131-83f9-4b52552d22d4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.273572 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5557d473-0c1f-4131-83f9-4b52552d22d4-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-pfc7k\" (UID: \"5557d473-0c1f-4131-83f9-4b52552d22d4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.274415 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5557d473-0c1f-4131-83f9-4b52552d22d4-tmp\") pod 
\"marketplace-operator-547dbd544d-pfc7k\" (UID: \"5557d473-0c1f-4131-83f9-4b52552d22d4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.275229 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5557d473-0c1f-4131-83f9-4b52552d22d4-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-pfc7k\" (UID: \"5557d473-0c1f-4131-83f9-4b52552d22d4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.280949 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5557d473-0c1f-4131-83f9-4b52552d22d4-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-pfc7k\" (UID: \"5557d473-0c1f-4131-83f9-4b52552d22d4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.299046 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mks6c\" (UniqueName: \"kubernetes.io/projected/5557d473-0c1f-4131-83f9-4b52552d22d4-kube-api-access-mks6c\") pod \"marketplace-operator-547dbd544d-pfc7k\" (UID: \"5557d473-0c1f-4131-83f9-4b52552d22d4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.526674 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.530708 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.578565 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25eb2cca-64e7-416b-9247-5548bf7a0eb4-catalog-content\") pod \"25eb2cca-64e7-416b-9247-5548bf7a0eb4\" (UID: \"25eb2cca-64e7-416b-9247-5548bf7a0eb4\") " Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.578808 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wljbx\" (UniqueName: \"kubernetes.io/projected/25eb2cca-64e7-416b-9247-5548bf7a0eb4-kube-api-access-wljbx\") pod \"25eb2cca-64e7-416b-9247-5548bf7a0eb4\" (UID: \"25eb2cca-64e7-416b-9247-5548bf7a0eb4\") " Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.578864 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25eb2cca-64e7-416b-9247-5548bf7a0eb4-utilities\") pod \"25eb2cca-64e7-416b-9247-5548bf7a0eb4\" (UID: \"25eb2cca-64e7-416b-9247-5548bf7a0eb4\") " Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.580084 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25eb2cca-64e7-416b-9247-5548bf7a0eb4-utilities" (OuterVolumeSpecName: "utilities") pod "25eb2cca-64e7-416b-9247-5548bf7a0eb4" (UID: "25eb2cca-64e7-416b-9247-5548bf7a0eb4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.585923 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25eb2cca-64e7-416b-9247-5548bf7a0eb4-kube-api-access-wljbx" (OuterVolumeSpecName: "kube-api-access-wljbx") pod "25eb2cca-64e7-416b-9247-5548bf7a0eb4" (UID: "25eb2cca-64e7-416b-9247-5548bf7a0eb4"). InnerVolumeSpecName "kube-api-access-wljbx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.680802 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wljbx\" (UniqueName: \"kubernetes.io/projected/25eb2cca-64e7-416b-9247-5548bf7a0eb4-kube-api-access-wljbx\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.680833 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25eb2cca-64e7-416b-9247-5548bf7a0eb4-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.681730 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25eb2cca-64e7-416b-9247-5548bf7a0eb4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25eb2cca-64e7-416b-9247-5548bf7a0eb4" (UID: "25eb2cca-64e7-416b-9247-5548bf7a0eb4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.782872 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25eb2cca-64e7-416b-9247-5548bf7a0eb4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.848004 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4s642" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.886420 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b43fdb9-c388-42c6-90d8-1fb5de88023a-utilities\") pod \"1b43fdb9-c388-42c6-90d8-1fb5de88023a\" (UID: \"1b43fdb9-c388-42c6-90d8-1fb5de88023a\") " Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.886618 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b43fdb9-c388-42c6-90d8-1fb5de88023a-catalog-content\") pod \"1b43fdb9-c388-42c6-90d8-1fb5de88023a\" (UID: \"1b43fdb9-c388-42c6-90d8-1fb5de88023a\") " Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.886649 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssdv8\" (UniqueName: \"kubernetes.io/projected/1b43fdb9-c388-42c6-90d8-1fb5de88023a-kube-api-access-ssdv8\") pod \"1b43fdb9-c388-42c6-90d8-1fb5de88023a\" (UID: \"1b43fdb9-c388-42c6-90d8-1fb5de88023a\") " Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.887548 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b43fdb9-c388-42c6-90d8-1fb5de88023a-utilities" (OuterVolumeSpecName: "utilities") pod "1b43fdb9-c388-42c6-90d8-1fb5de88023a" (UID: "1b43fdb9-c388-42c6-90d8-1fb5de88023a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.894110 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b43fdb9-c388-42c6-90d8-1fb5de88023a-kube-api-access-ssdv8" (OuterVolumeSpecName: "kube-api-access-ssdv8") pod "1b43fdb9-c388-42c6-90d8-1fb5de88023a" (UID: "1b43fdb9-c388-42c6-90d8-1fb5de88023a"). InnerVolumeSpecName "kube-api-access-ssdv8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.925102 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.957297 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b43fdb9-c388-42c6-90d8-1fb5de88023a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b43fdb9-c388-42c6-90d8-1fb5de88023a" (UID: "1b43fdb9-c388-42c6-90d8-1fb5de88023a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.967170 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.988211 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.989232 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42fe2705-f9ee-4e26-8e56-2730ba8f6196-marketplace-operator-metrics\") pod \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\" (UID: \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.989291 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbtcg\" (UniqueName: \"kubernetes.io/projected/42fe2705-f9ee-4e26-8e56-2730ba8f6196-kube-api-access-wbtcg\") pod \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\" (UID: \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.989388 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42fe2705-f9ee-4e26-8e56-2730ba8f6196-tmp\") pod \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\" (UID: \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.989423 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42fe2705-f9ee-4e26-8e56-2730ba8f6196-marketplace-trusted-ca\") pod \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\" (UID: \"42fe2705-f9ee-4e26-8e56-2730ba8f6196\") " Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.989443 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57ce2822-5420-457b-b3dc-1314fccf7d63-utilities\") pod \"57ce2822-5420-457b-b3dc-1314fccf7d63\" (UID: \"57ce2822-5420-457b-b3dc-1314fccf7d63\") " Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.989477 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv8vx\" (UniqueName: 
\"kubernetes.io/projected/57ce2822-5420-457b-b3dc-1314fccf7d63-kube-api-access-fv8vx\") pod \"57ce2822-5420-457b-b3dc-1314fccf7d63\" (UID: \"57ce2822-5420-457b-b3dc-1314fccf7d63\") " Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.989532 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57ce2822-5420-457b-b3dc-1314fccf7d63-catalog-content\") pod \"57ce2822-5420-457b-b3dc-1314fccf7d63\" (UID: \"57ce2822-5420-457b-b3dc-1314fccf7d63\") " Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.989704 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b43fdb9-c388-42c6-90d8-1fb5de88023a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.989719 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ssdv8\" (UniqueName: \"kubernetes.io/projected/1b43fdb9-c388-42c6-90d8-1fb5de88023a-kube-api-access-ssdv8\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.989729 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b43fdb9-c388-42c6-90d8-1fb5de88023a-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.993400 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42fe2705-f9ee-4e26-8e56-2730ba8f6196-tmp" (OuterVolumeSpecName: "tmp") pod "42fe2705-f9ee-4e26-8e56-2730ba8f6196" (UID: "42fe2705-f9ee-4e26-8e56-2730ba8f6196"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.995771 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57ce2822-5420-457b-b3dc-1314fccf7d63-utilities" (OuterVolumeSpecName: "utilities") pod "57ce2822-5420-457b-b3dc-1314fccf7d63" (UID: "57ce2822-5420-457b-b3dc-1314fccf7d63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.996174 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42fe2705-f9ee-4e26-8e56-2730ba8f6196-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "42fe2705-f9ee-4e26-8e56-2730ba8f6196" (UID: "42fe2705-f9ee-4e26-8e56-2730ba8f6196"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.998854 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ce2822-5420-457b-b3dc-1314fccf7d63-kube-api-access-fv8vx" (OuterVolumeSpecName: "kube-api-access-fv8vx") pod "57ce2822-5420-457b-b3dc-1314fccf7d63" (UID: "57ce2822-5420-457b-b3dc-1314fccf7d63"). InnerVolumeSpecName "kube-api-access-fv8vx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:20:06 crc kubenswrapper[5116]: I1209 14:20:06.999079 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42fe2705-f9ee-4e26-8e56-2730ba8f6196-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "42fe2705-f9ee-4e26-8e56-2730ba8f6196" (UID: "42fe2705-f9ee-4e26-8e56-2730ba8f6196"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.002641 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42fe2705-f9ee-4e26-8e56-2730ba8f6196-kube-api-access-wbtcg" (OuterVolumeSpecName: "kube-api-access-wbtcg") pod "42fe2705-f9ee-4e26-8e56-2730ba8f6196" (UID: "42fe2705-f9ee-4e26-8e56-2730ba8f6196"). InnerVolumeSpecName "kube-api-access-wbtcg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.019200 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57ce2822-5420-457b-b3dc-1314fccf7d63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57ce2822-5420-457b-b3dc-1314fccf7d63" (UID: "57ce2822-5420-457b-b3dc-1314fccf7d63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.090548 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27215642-7324-4959-8b89-554060ecec24-catalog-content\") pod \"27215642-7324-4959-8b89-554060ecec24\" (UID: \"27215642-7324-4959-8b89-554060ecec24\") " Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.090630 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xzc2\" (UniqueName: \"kubernetes.io/projected/27215642-7324-4959-8b89-554060ecec24-kube-api-access-4xzc2\") pod \"27215642-7324-4959-8b89-554060ecec24\" (UID: \"27215642-7324-4959-8b89-554060ecec24\") " Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.090692 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27215642-7324-4959-8b89-554060ecec24-utilities\") pod \"27215642-7324-4959-8b89-554060ecec24\" (UID: \"27215642-7324-4959-8b89-554060ecec24\") " Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.090834 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42fe2705-f9ee-4e26-8e56-2730ba8f6196-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.090845 5116 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42fe2705-f9ee-4e26-8e56-2730ba8f6196-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.090857 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57ce2822-5420-457b-b3dc-1314fccf7d63-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.090866 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fv8vx\" (UniqueName: \"kubernetes.io/projected/57ce2822-5420-457b-b3dc-1314fccf7d63-kube-api-access-fv8vx\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.090874 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57ce2822-5420-457b-b3dc-1314fccf7d63-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.090883 5116 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/42fe2705-f9ee-4e26-8e56-2730ba8f6196-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.090891 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbtcg\" (UniqueName: \"kubernetes.io/projected/42fe2705-f9ee-4e26-8e56-2730ba8f6196-kube-api-access-wbtcg\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.091816 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27215642-7324-4959-8b89-554060ecec24-utilities" (OuterVolumeSpecName: "utilities") pod "27215642-7324-4959-8b89-554060ecec24" (UID: "27215642-7324-4959-8b89-554060ecec24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.093720 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27215642-7324-4959-8b89-554060ecec24-kube-api-access-4xzc2" (OuterVolumeSpecName: "kube-api-access-4xzc2") pod "27215642-7324-4959-8b89-554060ecec24" (UID: "27215642-7324-4959-8b89-554060ecec24"). InnerVolumeSpecName "kube-api-access-4xzc2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.118343 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-pfc7k"] Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.124463 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27215642-7324-4959-8b89-554060ecec24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27215642-7324-4959-8b89-554060ecec24" (UID: "27215642-7324-4959-8b89-554060ecec24"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.192622 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27215642-7324-4959-8b89-554060ecec24-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.192669 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27215642-7324-4959-8b89-554060ecec24-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.192684 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4xzc2\" (UniqueName: \"kubernetes.io/projected/27215642-7324-4959-8b89-554060ecec24-kube-api-access-4xzc2\") on node \"crc\" DevicePath \"\"" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.214099 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6wc9" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.214346 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6wc9" event={"ID":"57ce2822-5420-457b-b3dc-1314fccf7d63","Type":"ContainerDied","Data":"f988029123d39fffce9bb13d7e23741621f4fc7c5ec3d93dee6f757f76e3c0ca"} Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.214394 5116 scope.go:117] "RemoveContainer" containerID="8e25dc8b8bdea841b2de09c095a18270b71011a7a8f78394408c22fbebc5b31e" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.216378 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" event={"ID":"5557d473-0c1f-4131-83f9-4b52552d22d4","Type":"ContainerStarted","Data":"66b6c8c3abc879849bba3e196e4bf41115e8b49defc06010386e84beb7cadc54"} Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.222902 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4ppp5" event={"ID":"27215642-7324-4959-8b89-554060ecec24","Type":"ContainerDied","Data":"652b8c149edd63eca74851a986956f92518e98183a36b1a21e884f8fb19f194b"} Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.222928 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4ppp5" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.226744 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4s642" event={"ID":"1b43fdb9-c388-42c6-90d8-1fb5de88023a","Type":"ContainerDied","Data":"4ce9fd65ee6a6df8ba2c0547966671f66e242e6e786843e6b6373b2a0dee34f4"} Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.226873 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4s642" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.229745 5116 generic.go:358] "Generic (PLEG): container finished" podID="25eb2cca-64e7-416b-9247-5548bf7a0eb4" containerID="b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00" exitCode=0 Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.229841 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tzzc" event={"ID":"25eb2cca-64e7-416b-9247-5548bf7a0eb4","Type":"ContainerDied","Data":"b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00"} Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.229867 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tzzc" event={"ID":"25eb2cca-64e7-416b-9247-5548bf7a0eb4","Type":"ContainerDied","Data":"9ebad28baf20338d188dfa28bfcdd6c122ecfc3402acebb51ac8a41c162b6a47"} Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.229946 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7tzzc" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.232009 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" event={"ID":"42fe2705-f9ee-4e26-8e56-2730ba8f6196","Type":"ContainerDied","Data":"02ed87818af791432e1275ba9686304ac82f9e09666c5f2a431215b390376b1d"} Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.232092 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-k9645" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.237102 5116 scope.go:117] "RemoveContainer" containerID="31ced6558c2265f64244783a233c27ff50754758a3cb2f94c0954e0eef793eb9" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.278672 5116 scope.go:117] "RemoveContainer" containerID="a04fbf82845d8ad2cf74366ce5ded8edaa1d1901dcbce5ff1f439a4fe5569651" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.283738 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4ppp5"] Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.286684 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4ppp5"] Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.300165 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7tzzc"] Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.301011 5116 scope.go:117] "RemoveContainer" containerID="af5b75e03b2a952db073b2c463e8db4e520442c192c46ca4345cca18d1256701" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.303746 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7tzzc"] Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.323012 5116 scope.go:117] "RemoveContainer" containerID="2c44ffb6f54ee7d102327bf12475d142a4999298c180cecf6dbb14b924f18a47" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.329693 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4s642"] Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.341922 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4s642"] Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.345564 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6wc9"] Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.346294 5116 scope.go:117] "RemoveContainer" containerID="db63082743425e66779649f3667cb5e65e15fda9d60b198df880fd8d79a9782d" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.348717 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6wc9"] Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.351690 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-k9645"] Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.354975 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-k9645"] Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.360911 5116 scope.go:117] "RemoveContainer" containerID="d9be77cb6c7b76fb77043c36fe092560241b2512a9cd608239b4bfd26724e006" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.375479 5116 scope.go:117] "RemoveContainer" containerID="2a89185b5ffa5373730d6077d30be3063f916a30e5444a4454be9b4d95c713af" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.393057 5116 scope.go:117] "RemoveContainer" containerID="2db21f732747fd25af8f55e3d4eee787fcb49eb1ac34c5b3da3852889a2e7480" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.405451 5116 scope.go:117] "RemoveContainer" containerID="b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.418878 5116 scope.go:117] "RemoveContainer" 
containerID="9956b854004cf034e0d20cafdc0cafbf43fcc0385f7d2a11cf4fa8e71371c3d8" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.435463 5116 scope.go:117] "RemoveContainer" containerID="b9bf7d82f7ccc5756d246420da2555dab858e16d3f09eb7473806e2979083b34" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.453990 5116 scope.go:117] "RemoveContainer" containerID="b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00" Dec 09 14:20:07 crc kubenswrapper[5116]: E1209 14:20:07.454579 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00\": container with ID starting with b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00 not found: ID does not exist" containerID="b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.454611 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00"} err="failed to get container status \"b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00\": rpc error: code = NotFound desc = could not find container \"b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00\": container with ID starting with b225e805e96f688d4a017e8f04b6496dbde802d17ad39e4d014e0aaf186a0c00 not found: ID does not exist" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.454634 5116 scope.go:117] "RemoveContainer" containerID="9956b854004cf034e0d20cafdc0cafbf43fcc0385f7d2a11cf4fa8e71371c3d8" Dec 09 14:20:07 crc kubenswrapper[5116]: E1209 14:20:07.454817 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9956b854004cf034e0d20cafdc0cafbf43fcc0385f7d2a11cf4fa8e71371c3d8\": container with ID starting with 9956b854004cf034e0d20cafdc0cafbf43fcc0385f7d2a11cf4fa8e71371c3d8 not found: ID does not exist" containerID="9956b854004cf034e0d20cafdc0cafbf43fcc0385f7d2a11cf4fa8e71371c3d8" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.454838 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9956b854004cf034e0d20cafdc0cafbf43fcc0385f7d2a11cf4fa8e71371c3d8"} err="failed to get container status \"9956b854004cf034e0d20cafdc0cafbf43fcc0385f7d2a11cf4fa8e71371c3d8\": rpc error: code = NotFound desc = could not find container \"9956b854004cf034e0d20cafdc0cafbf43fcc0385f7d2a11cf4fa8e71371c3d8\": container with ID starting with 9956b854004cf034e0d20cafdc0cafbf43fcc0385f7d2a11cf4fa8e71371c3d8 not found: ID does not exist" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.454849 5116 scope.go:117] "RemoveContainer" containerID="b9bf7d82f7ccc5756d246420da2555dab858e16d3f09eb7473806e2979083b34" Dec 09 14:20:07 crc kubenswrapper[5116]: E1209 14:20:07.455317 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9bf7d82f7ccc5756d246420da2555dab858e16d3f09eb7473806e2979083b34\": container with ID starting with b9bf7d82f7ccc5756d246420da2555dab858e16d3f09eb7473806e2979083b34 not found: ID does not exist" containerID="b9bf7d82f7ccc5756d246420da2555dab858e16d3f09eb7473806e2979083b34" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.455338 5116 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b9bf7d82f7ccc5756d246420da2555dab858e16d3f09eb7473806e2979083b34"} err="failed to get container status \"b9bf7d82f7ccc5756d246420da2555dab858e16d3f09eb7473806e2979083b34\": rpc error: code = NotFound desc = could not find container \"b9bf7d82f7ccc5756d246420da2555dab858e16d3f09eb7473806e2979083b34\": container with ID starting with b9bf7d82f7ccc5756d246420da2555dab858e16d3f09eb7473806e2979083b34 not found: ID does not exist" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.455352 5116 scope.go:117] "RemoveContainer" containerID="3e4de743939b2a7e05ae56ebb593688fa542c53cbfecbb6dc0d476c2a4b1fb1a" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.756907 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b43fdb9-c388-42c6-90d8-1fb5de88023a" path="/var/lib/kubelet/pods/1b43fdb9-c388-42c6-90d8-1fb5de88023a/volumes" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.757747 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25eb2cca-64e7-416b-9247-5548bf7a0eb4" path="/var/lib/kubelet/pods/25eb2cca-64e7-416b-9247-5548bf7a0eb4/volumes" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.758304 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27215642-7324-4959-8b89-554060ecec24" path="/var/lib/kubelet/pods/27215642-7324-4959-8b89-554060ecec24/volumes" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.758978 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42fe2705-f9ee-4e26-8e56-2730ba8f6196" path="/var/lib/kubelet/pods/42fe2705-f9ee-4e26-8e56-2730ba8f6196/volumes" Dec 09 14:20:07 crc kubenswrapper[5116]: I1209 14:20:07.759456 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57ce2822-5420-457b-b3dc-1314fccf7d63" path="/var/lib/kubelet/pods/57ce2822-5420-457b-b3dc-1314fccf7d63/volumes" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.141018 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4w8st"] Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142101 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27215642-7324-4959-8b89-554060ecec24" containerName="registry-server" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142142 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="27215642-7324-4959-8b89-554060ecec24" containerName="registry-server" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142170 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57ce2822-5420-457b-b3dc-1314fccf7d63" containerName="extract-utilities" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142187 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ce2822-5420-457b-b3dc-1314fccf7d63" containerName="extract-utilities" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142210 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="25eb2cca-64e7-416b-9247-5548bf7a0eb4" containerName="extract-content" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142228 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="25eb2cca-64e7-416b-9247-5548bf7a0eb4" containerName="extract-content" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142250 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27215642-7324-4959-8b89-554060ecec24" containerName="extract-content" Dec 09 14:20:08 crc 
kubenswrapper[5116]: I1209 14:20:08.142265 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="27215642-7324-4959-8b89-554060ecec24" containerName="extract-content" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142295 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="42fe2705-f9ee-4e26-8e56-2730ba8f6196" containerName="marketplace-operator" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142310 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="42fe2705-f9ee-4e26-8e56-2730ba8f6196" containerName="marketplace-operator" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142345 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27215642-7324-4959-8b89-554060ecec24" containerName="extract-utilities" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142361 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="27215642-7324-4959-8b89-554060ecec24" containerName="extract-utilities" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142383 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="42fe2705-f9ee-4e26-8e56-2730ba8f6196" containerName="marketplace-operator" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142398 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="42fe2705-f9ee-4e26-8e56-2730ba8f6196" containerName="marketplace-operator" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142418 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57ce2822-5420-457b-b3dc-1314fccf7d63" containerName="registry-server" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142434 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ce2822-5420-457b-b3dc-1314fccf7d63" containerName="registry-server" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142454 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1b43fdb9-c388-42c6-90d8-1fb5de88023a" containerName="extract-content" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142471 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b43fdb9-c388-42c6-90d8-1fb5de88023a" containerName="extract-content" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142492 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57ce2822-5420-457b-b3dc-1314fccf7d63" containerName="extract-content" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142507 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ce2822-5420-457b-b3dc-1314fccf7d63" containerName="extract-content" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142558 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1b43fdb9-c388-42c6-90d8-1fb5de88023a" containerName="extract-utilities" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142573 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b43fdb9-c388-42c6-90d8-1fb5de88023a" containerName="extract-utilities" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142601 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1b43fdb9-c388-42c6-90d8-1fb5de88023a" containerName="registry-server" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142617 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b43fdb9-c388-42c6-90d8-1fb5de88023a" containerName="registry-server" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142644 
5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="25eb2cca-64e7-416b-9247-5548bf7a0eb4" containerName="registry-server" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142658 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="25eb2cca-64e7-416b-9247-5548bf7a0eb4" containerName="registry-server" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142679 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="25eb2cca-64e7-416b-9247-5548bf7a0eb4" containerName="extract-utilities" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142694 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="25eb2cca-64e7-416b-9247-5548bf7a0eb4" containerName="extract-utilities" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.142986 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="1b43fdb9-c388-42c6-90d8-1fb5de88023a" containerName="registry-server" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.143063 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="25eb2cca-64e7-416b-9247-5548bf7a0eb4" containerName="registry-server" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.143085 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="42fe2705-f9ee-4e26-8e56-2730ba8f6196" containerName="marketplace-operator" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.143114 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="57ce2822-5420-457b-b3dc-1314fccf7d63" containerName="registry-server" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.143141 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="27215642-7324-4959-8b89-554060ecec24" containerName="registry-server" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.143159 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="42fe2705-f9ee-4e26-8e56-2730ba8f6196" containerName="marketplace-operator" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.201995 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w8st"] Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.202059 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.207590 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.241861 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" event={"ID":"5557d473-0c1f-4131-83f9-4b52552d22d4","Type":"ContainerStarted","Data":"be1d8a8c6fafc0f63e1be965654ecfe9cdc68f204789181b084c6b845980d03e"} Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.242092 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.244732 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.260361 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-pfc7k" podStartSLOduration=3.2603379869999998 podStartE2EDuration="3.260337987s" podCreationTimestamp="2025-12-09 14:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:20:08.256658759 +0000 UTC m=+346.778403557" watchObservedRunningTime="2025-12-09 14:20:08.260337987 +0000 UTC m=+346.782082795" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.310806 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kc8d\" (UniqueName: \"kubernetes.io/projected/089c6387-062c-492a-926d-6a8793fe453b-kube-api-access-4kc8d\") pod \"redhat-marketplace-4w8st\" (UID: \"089c6387-062c-492a-926d-6a8793fe453b\") " pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.310926 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089c6387-062c-492a-926d-6a8793fe453b-catalog-content\") pod \"redhat-marketplace-4w8st\" (UID: \"089c6387-062c-492a-926d-6a8793fe453b\") " pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.310945 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089c6387-062c-492a-926d-6a8793fe453b-utilities\") pod \"redhat-marketplace-4w8st\" (UID: \"089c6387-062c-492a-926d-6a8793fe453b\") " pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.412170 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4kc8d\" (UniqueName: \"kubernetes.io/projected/089c6387-062c-492a-926d-6a8793fe453b-kube-api-access-4kc8d\") pod \"redhat-marketplace-4w8st\" (UID: \"089c6387-062c-492a-926d-6a8793fe453b\") " pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.412295 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089c6387-062c-492a-926d-6a8793fe453b-catalog-content\") pod 
\"redhat-marketplace-4w8st\" (UID: \"089c6387-062c-492a-926d-6a8793fe453b\") " pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.412326 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089c6387-062c-492a-926d-6a8793fe453b-utilities\") pod \"redhat-marketplace-4w8st\" (UID: \"089c6387-062c-492a-926d-6a8793fe453b\") " pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.412849 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089c6387-062c-492a-926d-6a8793fe453b-catalog-content\") pod \"redhat-marketplace-4w8st\" (UID: \"089c6387-062c-492a-926d-6a8793fe453b\") " pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.413015 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089c6387-062c-492a-926d-6a8793fe453b-utilities\") pod \"redhat-marketplace-4w8st\" (UID: \"089c6387-062c-492a-926d-6a8793fe453b\") " pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.431492 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kc8d\" (UniqueName: \"kubernetes.io/projected/089c6387-062c-492a-926d-6a8793fe453b-kube-api-access-4kc8d\") pod \"redhat-marketplace-4w8st\" (UID: \"089c6387-062c-492a-926d-6a8793fe453b\") " pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.517604 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:20:08 crc kubenswrapper[5116]: I1209 14:20:08.915516 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w8st"] Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.126272 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7j4dr"] Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.134362 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7j4dr" Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.138756 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.142820 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7j4dr"] Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.222387 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlzjm\" (UniqueName: \"kubernetes.io/projected/05fa0f6e-4c27-4496-b042-ce929c774683-kube-api-access-vlzjm\") pod \"redhat-operators-7j4dr\" (UID: \"05fa0f6e-4c27-4496-b042-ce929c774683\") " pod="openshift-marketplace/redhat-operators-7j4dr" Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.222526 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05fa0f6e-4c27-4496-b042-ce929c774683-catalog-content\") pod \"redhat-operators-7j4dr\" (UID: \"05fa0f6e-4c27-4496-b042-ce929c774683\") " pod="openshift-marketplace/redhat-operators-7j4dr" Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.222645 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05fa0f6e-4c27-4496-b042-ce929c774683-utilities\") pod \"redhat-operators-7j4dr\" (UID: \"05fa0f6e-4c27-4496-b042-ce929c774683\") " pod="openshift-marketplace/redhat-operators-7j4dr" Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.250055 5116 generic.go:358] "Generic (PLEG): container finished" podID="089c6387-062c-492a-926d-6a8793fe453b" containerID="9607a22f424742c7aac94f4d1c27600ec7bceef87e5f5450fbc9af7a07fae392" exitCode=0 Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.250140 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w8st" event={"ID":"089c6387-062c-492a-926d-6a8793fe453b","Type":"ContainerDied","Data":"9607a22f424742c7aac94f4d1c27600ec7bceef87e5f5450fbc9af7a07fae392"} Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.250182 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w8st" event={"ID":"089c6387-062c-492a-926d-6a8793fe453b","Type":"ContainerStarted","Data":"fdec22c9a9650df622e34807fad9ca97f3c7432bfab0687fa4c3149133a7565f"} Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.324167 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vlzjm\" (UniqueName: \"kubernetes.io/projected/05fa0f6e-4c27-4496-b042-ce929c774683-kube-api-access-vlzjm\") pod \"redhat-operators-7j4dr\" (UID: \"05fa0f6e-4c27-4496-b042-ce929c774683\") " pod="openshift-marketplace/redhat-operators-7j4dr" Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.324274 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05fa0f6e-4c27-4496-b042-ce929c774683-catalog-content\") pod \"redhat-operators-7j4dr\" (UID: \"05fa0f6e-4c27-4496-b042-ce929c774683\") " pod="openshift-marketplace/redhat-operators-7j4dr" Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.324740 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/05fa0f6e-4c27-4496-b042-ce929c774683-utilities\") pod \"redhat-operators-7j4dr\" (UID: \"05fa0f6e-4c27-4496-b042-ce929c774683\") " pod="openshift-marketplace/redhat-operators-7j4dr" Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.324829 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05fa0f6e-4c27-4496-b042-ce929c774683-catalog-content\") pod \"redhat-operators-7j4dr\" (UID: \"05fa0f6e-4c27-4496-b042-ce929c774683\") " pod="openshift-marketplace/redhat-operators-7j4dr" Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.325127 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05fa0f6e-4c27-4496-b042-ce929c774683-utilities\") pod \"redhat-operators-7j4dr\" (UID: \"05fa0f6e-4c27-4496-b042-ce929c774683\") " pod="openshift-marketplace/redhat-operators-7j4dr" Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.342885 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlzjm\" (UniqueName: \"kubernetes.io/projected/05fa0f6e-4c27-4496-b042-ce929c774683-kube-api-access-vlzjm\") pod \"redhat-operators-7j4dr\" (UID: \"05fa0f6e-4c27-4496-b042-ce929c774683\") " pod="openshift-marketplace/redhat-operators-7j4dr" Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.460507 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j4dr" Dec 09 14:20:09 crc kubenswrapper[5116]: I1209 14:20:09.870428 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7j4dr"] Dec 09 14:20:10 crc kubenswrapper[5116]: I1209 14:20:10.259227 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j4dr" event={"ID":"05fa0f6e-4c27-4496-b042-ce929c774683","Type":"ContainerStarted","Data":"01ba0edd3c5692234d8332042feadd6aa120e8fecdbc58be3c9f06fa244c47d8"} Dec 09 14:20:10 crc kubenswrapper[5116]: I1209 14:20:10.529483 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bvlwk"] Dec 09 14:20:10 crc kubenswrapper[5116]: I1209 14:20:10.534683 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bvlwk" Dec 09 14:20:10 crc kubenswrapper[5116]: I1209 14:20:10.537003 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 09 14:20:10 crc kubenswrapper[5116]: I1209 14:20:10.545763 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bvlwk"] Dec 09 14:20:10 crc kubenswrapper[5116]: I1209 14:20:10.642143 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkrjs\" (UniqueName: \"kubernetes.io/projected/3ff9c2b9-d709-4e3a-8898-57a40593f835-kube-api-access-pkrjs\") pod \"community-operators-bvlwk\" (UID: \"3ff9c2b9-d709-4e3a-8898-57a40593f835\") " pod="openshift-marketplace/community-operators-bvlwk" Dec 09 14:20:10 crc kubenswrapper[5116]: I1209 14:20:10.642181 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff9c2b9-d709-4e3a-8898-57a40593f835-utilities\") pod \"community-operators-bvlwk\" (UID: \"3ff9c2b9-d709-4e3a-8898-57a40593f835\") " pod="openshift-marketplace/community-operators-bvlwk" Dec 09 14:20:10 crc kubenswrapper[5116]: I1209 14:20:10.642201 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff9c2b9-d709-4e3a-8898-57a40593f835-catalog-content\") pod \"community-operators-bvlwk\" (UID: \"3ff9c2b9-d709-4e3a-8898-57a40593f835\") " pod="openshift-marketplace/community-operators-bvlwk" Dec 09 14:20:10 crc kubenswrapper[5116]: I1209 14:20:10.743105 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pkrjs\" (UniqueName: \"kubernetes.io/projected/3ff9c2b9-d709-4e3a-8898-57a40593f835-kube-api-access-pkrjs\") pod \"community-operators-bvlwk\" (UID: \"3ff9c2b9-d709-4e3a-8898-57a40593f835\") " pod="openshift-marketplace/community-operators-bvlwk" Dec 09 14:20:10 crc kubenswrapper[5116]: I1209 14:20:10.743164 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff9c2b9-d709-4e3a-8898-57a40593f835-utilities\") pod \"community-operators-bvlwk\" (UID: \"3ff9c2b9-d709-4e3a-8898-57a40593f835\") " pod="openshift-marketplace/community-operators-bvlwk" Dec 09 14:20:10 crc kubenswrapper[5116]: I1209 14:20:10.743545 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff9c2b9-d709-4e3a-8898-57a40593f835-catalog-content\") pod \"community-operators-bvlwk\" (UID: \"3ff9c2b9-d709-4e3a-8898-57a40593f835\") " pod="openshift-marketplace/community-operators-bvlwk" Dec 09 14:20:10 crc kubenswrapper[5116]: I1209 14:20:10.743621 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff9c2b9-d709-4e3a-8898-57a40593f835-utilities\") pod \"community-operators-bvlwk\" (UID: \"3ff9c2b9-d709-4e3a-8898-57a40593f835\") " pod="openshift-marketplace/community-operators-bvlwk" Dec 09 14:20:10 crc kubenswrapper[5116]: I1209 14:20:10.743876 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff9c2b9-d709-4e3a-8898-57a40593f835-catalog-content\") pod 
\"community-operators-bvlwk\" (UID: \"3ff9c2b9-d709-4e3a-8898-57a40593f835\") " pod="openshift-marketplace/community-operators-bvlwk" Dec 09 14:20:10 crc kubenswrapper[5116]: I1209 14:20:10.774765 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkrjs\" (UniqueName: \"kubernetes.io/projected/3ff9c2b9-d709-4e3a-8898-57a40593f835-kube-api-access-pkrjs\") pod \"community-operators-bvlwk\" (UID: \"3ff9c2b9-d709-4e3a-8898-57a40593f835\") " pod="openshift-marketplace/community-operators-bvlwk" Dec 09 14:20:10 crc kubenswrapper[5116]: I1209 14:20:10.870213 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bvlwk" Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.270337 5116 generic.go:358] "Generic (PLEG): container finished" podID="089c6387-062c-492a-926d-6a8793fe453b" containerID="0dd51a96f882b5d313575bf448390de35e8b0e20ee3d54f50b166fc8991404da" exitCode=0 Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.270393 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w8st" event={"ID":"089c6387-062c-492a-926d-6a8793fe453b","Type":"ContainerDied","Data":"0dd51a96f882b5d313575bf448390de35e8b0e20ee3d54f50b166fc8991404da"} Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.272737 5116 generic.go:358] "Generic (PLEG): container finished" podID="05fa0f6e-4c27-4496-b042-ce929c774683" containerID="29ee57940720499b805e81c8a3acc697e5fc74070b28fb8cab82a9719009dc97" exitCode=0 Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.272999 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j4dr" event={"ID":"05fa0f6e-4c27-4496-b042-ce929c774683","Type":"ContainerDied","Data":"29ee57940720499b805e81c8a3acc697e5fc74070b28fb8cab82a9719009dc97"} Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.285461 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bvlwk"] Dec 09 14:20:11 crc kubenswrapper[5116]: W1209 14:20:11.303102 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ff9c2b9_d709_4e3a_8898_57a40593f835.slice/crio-e245465b21e7ff2ac092c20d0c8c234e0151b1f44045108e2a931d45c5bf03d1 WatchSource:0}: Error finding container e245465b21e7ff2ac092c20d0c8c234e0151b1f44045108e2a931d45c5bf03d1: Status 404 returned error can't find the container with id e245465b21e7ff2ac092c20d0c8c234e0151b1f44045108e2a931d45c5bf03d1 Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.534837 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rvx4l"] Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.539306 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rvx4l" Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.541864 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.547964 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvx4l"] Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.654339 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnmst\" (UniqueName: \"kubernetes.io/projected/cfcb49c0-ed67-411a-80ed-e587841d95ec-kube-api-access-gnmst\") pod \"certified-operators-rvx4l\" (UID: \"cfcb49c0-ed67-411a-80ed-e587841d95ec\") " pod="openshift-marketplace/certified-operators-rvx4l" Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.654488 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfcb49c0-ed67-411a-80ed-e587841d95ec-catalog-content\") pod \"certified-operators-rvx4l\" (UID: \"cfcb49c0-ed67-411a-80ed-e587841d95ec\") " pod="openshift-marketplace/certified-operators-rvx4l" Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.654610 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfcb49c0-ed67-411a-80ed-e587841d95ec-utilities\") pod \"certified-operators-rvx4l\" (UID: \"cfcb49c0-ed67-411a-80ed-e587841d95ec\") " pod="openshift-marketplace/certified-operators-rvx4l" Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.755577 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfcb49c0-ed67-411a-80ed-e587841d95ec-utilities\") pod \"certified-operators-rvx4l\" (UID: \"cfcb49c0-ed67-411a-80ed-e587841d95ec\") " pod="openshift-marketplace/certified-operators-rvx4l" Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.757212 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gnmst\" (UniqueName: \"kubernetes.io/projected/cfcb49c0-ed67-411a-80ed-e587841d95ec-kube-api-access-gnmst\") pod \"certified-operators-rvx4l\" (UID: \"cfcb49c0-ed67-411a-80ed-e587841d95ec\") " pod="openshift-marketplace/certified-operators-rvx4l" Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.757426 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfcb49c0-ed67-411a-80ed-e587841d95ec-catalog-content\") pod \"certified-operators-rvx4l\" (UID: \"cfcb49c0-ed67-411a-80ed-e587841d95ec\") " pod="openshift-marketplace/certified-operators-rvx4l" Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.756057 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfcb49c0-ed67-411a-80ed-e587841d95ec-utilities\") pod \"certified-operators-rvx4l\" (UID: \"cfcb49c0-ed67-411a-80ed-e587841d95ec\") " pod="openshift-marketplace/certified-operators-rvx4l" Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.757839 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfcb49c0-ed67-411a-80ed-e587841d95ec-catalog-content\") pod 
\"certified-operators-rvx4l\" (UID: \"cfcb49c0-ed67-411a-80ed-e587841d95ec\") " pod="openshift-marketplace/certified-operators-rvx4l" Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.790458 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnmst\" (UniqueName: \"kubernetes.io/projected/cfcb49c0-ed67-411a-80ed-e587841d95ec-kube-api-access-gnmst\") pod \"certified-operators-rvx4l\" (UID: \"cfcb49c0-ed67-411a-80ed-e587841d95ec\") " pod="openshift-marketplace/certified-operators-rvx4l" Dec 09 14:20:11 crc kubenswrapper[5116]: I1209 14:20:11.854613 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvx4l" Dec 09 14:20:12 crc kubenswrapper[5116]: I1209 14:20:12.280584 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvx4l"] Dec 09 14:20:12 crc kubenswrapper[5116]: I1209 14:20:12.287855 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w8st" event={"ID":"089c6387-062c-492a-926d-6a8793fe453b","Type":"ContainerStarted","Data":"cac66464fbe41e33e9729b6410719d821f97e77d7de83c1844d57273760ce01b"} Dec 09 14:20:12 crc kubenswrapper[5116]: I1209 14:20:12.291568 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j4dr" event={"ID":"05fa0f6e-4c27-4496-b042-ce929c774683","Type":"ContainerStarted","Data":"32fa8439ae8ad0e0eabb100f7733ffe4d0531f924399ab6d7b08bdb42d499845"} Dec 09 14:20:12 crc kubenswrapper[5116]: W1209 14:20:12.295087 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfcb49c0_ed67_411a_80ed_e587841d95ec.slice/crio-81b641cc0724738b2df2f80522c8605f352b8ef0dc7b05676844e5c3633db12f WatchSource:0}: Error finding container 81b641cc0724738b2df2f80522c8605f352b8ef0dc7b05676844e5c3633db12f: Status 404 returned error can't find the container with id 81b641cc0724738b2df2f80522c8605f352b8ef0dc7b05676844e5c3633db12f Dec 09 14:20:12 crc kubenswrapper[5116]: I1209 14:20:12.295137 5116 generic.go:358] "Generic (PLEG): container finished" podID="3ff9c2b9-d709-4e3a-8898-57a40593f835" containerID="57a2d748c59a81be43645bcdc5ba6a5b8b677da90acf7451e90bc1020b0e6769" exitCode=0 Dec 09 14:20:12 crc kubenswrapper[5116]: I1209 14:20:12.295227 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvlwk" event={"ID":"3ff9c2b9-d709-4e3a-8898-57a40593f835","Type":"ContainerDied","Data":"57a2d748c59a81be43645bcdc5ba6a5b8b677da90acf7451e90bc1020b0e6769"} Dec 09 14:20:12 crc kubenswrapper[5116]: I1209 14:20:12.295248 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvlwk" event={"ID":"3ff9c2b9-d709-4e3a-8898-57a40593f835","Type":"ContainerStarted","Data":"e245465b21e7ff2ac092c20d0c8c234e0151b1f44045108e2a931d45c5bf03d1"} Dec 09 14:20:12 crc kubenswrapper[5116]: I1209 14:20:12.309612 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4w8st" podStartSLOduration=3.432370083 podStartE2EDuration="4.30959646s" podCreationTimestamp="2025-12-09 14:20:08 +0000 UTC" firstStartedPulling="2025-12-09 14:20:09.251799745 +0000 UTC m=+347.773544543" lastFinishedPulling="2025-12-09 14:20:10.129026122 +0000 UTC m=+348.650770920" observedRunningTime="2025-12-09 14:20:12.306858317 +0000 UTC m=+350.828603115" 
watchObservedRunningTime="2025-12-09 14:20:12.30959646 +0000 UTC m=+350.831341258" Dec 09 14:20:13 crc kubenswrapper[5116]: I1209 14:20:13.302840 5116 generic.go:358] "Generic (PLEG): container finished" podID="05fa0f6e-4c27-4496-b042-ce929c774683" containerID="32fa8439ae8ad0e0eabb100f7733ffe4d0531f924399ab6d7b08bdb42d499845" exitCode=0 Dec 09 14:20:13 crc kubenswrapper[5116]: I1209 14:20:13.302948 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j4dr" event={"ID":"05fa0f6e-4c27-4496-b042-ce929c774683","Type":"ContainerDied","Data":"32fa8439ae8ad0e0eabb100f7733ffe4d0531f924399ab6d7b08bdb42d499845"} Dec 09 14:20:13 crc kubenswrapper[5116]: I1209 14:20:13.306559 5116 generic.go:358] "Generic (PLEG): container finished" podID="cfcb49c0-ed67-411a-80ed-e587841d95ec" containerID="1eb1c75cfae2d61ee5a040b892f6c7217d845c29b2b17fa357d3dfc473c37008" exitCode=0 Dec 09 14:20:13 crc kubenswrapper[5116]: I1209 14:20:13.306676 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvx4l" event={"ID":"cfcb49c0-ed67-411a-80ed-e587841d95ec","Type":"ContainerDied","Data":"1eb1c75cfae2d61ee5a040b892f6c7217d845c29b2b17fa357d3dfc473c37008"} Dec 09 14:20:13 crc kubenswrapper[5116]: I1209 14:20:13.306738 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvx4l" event={"ID":"cfcb49c0-ed67-411a-80ed-e587841d95ec","Type":"ContainerStarted","Data":"81b641cc0724738b2df2f80522c8605f352b8ef0dc7b05676844e5c3633db12f"} Dec 09 14:20:13 crc kubenswrapper[5116]: I1209 14:20:13.308907 5116 generic.go:358] "Generic (PLEG): container finished" podID="3ff9c2b9-d709-4e3a-8898-57a40593f835" containerID="cc7a5708d5991b6a3939c8c4a9d7807f1c8f971846e1f084112c0f80bb279714" exitCode=0 Dec 09 14:20:13 crc kubenswrapper[5116]: I1209 14:20:13.309070 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvlwk" event={"ID":"3ff9c2b9-d709-4e3a-8898-57a40593f835","Type":"ContainerDied","Data":"cc7a5708d5991b6a3939c8c4a9d7807f1c8f971846e1f084112c0f80bb279714"} Dec 09 14:20:14 crc kubenswrapper[5116]: I1209 14:20:14.316262 5116 generic.go:358] "Generic (PLEG): container finished" podID="cfcb49c0-ed67-411a-80ed-e587841d95ec" containerID="80b4ee4e1720fad28fbce83338727236c7c28633e162739555215707401f5f47" exitCode=0 Dec 09 14:20:14 crc kubenswrapper[5116]: I1209 14:20:14.316361 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvx4l" event={"ID":"cfcb49c0-ed67-411a-80ed-e587841d95ec","Type":"ContainerDied","Data":"80b4ee4e1720fad28fbce83338727236c7c28633e162739555215707401f5f47"} Dec 09 14:20:14 crc kubenswrapper[5116]: I1209 14:20:14.320382 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvlwk" event={"ID":"3ff9c2b9-d709-4e3a-8898-57a40593f835","Type":"ContainerStarted","Data":"4f572b3963949a95301927971d7d5a6211ca416fb0e9627ef903e7d5c6852ce0"} Dec 09 14:20:14 crc kubenswrapper[5116]: I1209 14:20:14.323035 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j4dr" event={"ID":"05fa0f6e-4c27-4496-b042-ce929c774683","Type":"ContainerStarted","Data":"f003b144bbbeb5e2c4f40db224bf21d1c83197c4631334012acaf0ed46eb9dc9"} Dec 09 14:20:14 crc kubenswrapper[5116]: I1209 14:20:14.358349 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bvlwk" 
podStartSLOduration=3.757146242 podStartE2EDuration="4.358332529s" podCreationTimestamp="2025-12-09 14:20:10 +0000 UTC" firstStartedPulling="2025-12-09 14:20:12.295947707 +0000 UTC m=+350.817692505" lastFinishedPulling="2025-12-09 14:20:12.897134004 +0000 UTC m=+351.418878792" observedRunningTime="2025-12-09 14:20:14.355402051 +0000 UTC m=+352.877146849" watchObservedRunningTime="2025-12-09 14:20:14.358332529 +0000 UTC m=+352.880077327" Dec 09 14:20:14 crc kubenswrapper[5116]: I1209 14:20:14.375729 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7j4dr" podStartSLOduration=4.765491216 podStartE2EDuration="5.375710892s" podCreationTimestamp="2025-12-09 14:20:09 +0000 UTC" firstStartedPulling="2025-12-09 14:20:11.274853181 +0000 UTC m=+349.796598019" lastFinishedPulling="2025-12-09 14:20:11.885072907 +0000 UTC m=+350.406817695" observedRunningTime="2025-12-09 14:20:14.374748597 +0000 UTC m=+352.896493395" watchObservedRunningTime="2025-12-09 14:20:14.375710892 +0000 UTC m=+352.897455710" Dec 09 14:20:15 crc kubenswrapper[5116]: I1209 14:20:15.331428 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvx4l" event={"ID":"cfcb49c0-ed67-411a-80ed-e587841d95ec","Type":"ContainerStarted","Data":"9eb142d704e04ab41fcdf8302c253db57435411103a2d0d0e2a55c9f0824e23a"} Dec 09 14:20:15 crc kubenswrapper[5116]: I1209 14:20:15.353287 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rvx4l" podStartSLOduration=3.854098769 podStartE2EDuration="4.35326869s" podCreationTimestamp="2025-12-09 14:20:11 +0000 UTC" firstStartedPulling="2025-12-09 14:20:13.307337166 +0000 UTC m=+351.829081964" lastFinishedPulling="2025-12-09 14:20:13.806507087 +0000 UTC m=+352.328251885" observedRunningTime="2025-12-09 14:20:15.348339099 +0000 UTC m=+353.870083897" watchObservedRunningTime="2025-12-09 14:20:15.35326869 +0000 UTC m=+353.875013488" Dec 09 14:20:18 crc kubenswrapper[5116]: I1209 14:20:18.519148 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:20:18 crc kubenswrapper[5116]: I1209 14:20:18.519777 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:20:18 crc kubenswrapper[5116]: I1209 14:20:18.575092 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:20:19 crc kubenswrapper[5116]: I1209 14:20:19.403925 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:20:19 crc kubenswrapper[5116]: I1209 14:20:19.460733 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7j4dr" Dec 09 14:20:19 crc kubenswrapper[5116]: I1209 14:20:19.460773 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-7j4dr" Dec 09 14:20:19 crc kubenswrapper[5116]: I1209 14:20:19.506519 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7j4dr" Dec 09 14:20:20 crc kubenswrapper[5116]: I1209 14:20:20.413581 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7j4dr" Dec 09 
14:20:20 crc kubenswrapper[5116]: I1209 14:20:20.870421 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-bvlwk" Dec 09 14:20:20 crc kubenswrapper[5116]: I1209 14:20:20.871032 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bvlwk" Dec 09 14:20:20 crc kubenswrapper[5116]: I1209 14:20:20.915043 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bvlwk" Dec 09 14:20:21 crc kubenswrapper[5116]: I1209 14:20:21.427247 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bvlwk" Dec 09 14:20:21 crc kubenswrapper[5116]: I1209 14:20:21.855066 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rvx4l" Dec 09 14:20:21 crc kubenswrapper[5116]: I1209 14:20:21.855714 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-rvx4l" Dec 09 14:20:21 crc kubenswrapper[5116]: I1209 14:20:21.897240 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rvx4l" Dec 09 14:20:22 crc kubenswrapper[5116]: I1209 14:20:22.413478 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rvx4l" Dec 09 14:21:52 crc kubenswrapper[5116]: I1209 14:21:52.167890 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:21:52 crc kubenswrapper[5116]: I1209 14:21:52.169238 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:22:21 crc kubenswrapper[5116]: I1209 14:22:21.993215 5116 scope.go:117] "RemoveContainer" containerID="f35a6f5fa0ba9a66296c557256cf641e84fe698b2ef2571509f355d4dbff6ed6" Dec 09 14:22:22 crc kubenswrapper[5116]: I1209 14:22:22.166644 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:22:22 crc kubenswrapper[5116]: I1209 14:22:22.166739 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:22:52 crc kubenswrapper[5116]: I1209 14:22:52.167447 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 09 14:22:52 crc kubenswrapper[5116]: I1209 14:22:52.168281 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:22:52 crc kubenswrapper[5116]: I1209 14:22:52.168339 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:22:52 crc kubenswrapper[5116]: I1209 14:22:52.168935 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"17f6fa581e6c8bab2a2df410743179402ba025d782243947d0021133fa9c7873"} pod="openshift-machine-config-operator/machine-config-daemon-phdhk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 14:22:52 crc kubenswrapper[5116]: I1209 14:22:52.169018 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" containerID="cri-o://17f6fa581e6c8bab2a2df410743179402ba025d782243947d0021133fa9c7873" gracePeriod=600 Dec 09 14:22:52 crc kubenswrapper[5116]: I1209 14:22:52.342290 5116 generic.go:358] "Generic (PLEG): container finished" podID="140ab739-f0e3-4429-8e23-03782755777d" containerID="17f6fa581e6c8bab2a2df410743179402ba025d782243947d0021133fa9c7873" exitCode=0 Dec 09 14:22:52 crc kubenswrapper[5116]: I1209 14:22:52.342380 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" event={"ID":"140ab739-f0e3-4429-8e23-03782755777d","Type":"ContainerDied","Data":"17f6fa581e6c8bab2a2df410743179402ba025d782243947d0021133fa9c7873"} Dec 09 14:22:52 crc kubenswrapper[5116]: I1209 14:22:52.342809 5116 scope.go:117] "RemoveContainer" containerID="5afb3f1234496f0c21362a02e430385f906ab7e11ce4551967623f700fcd8dcc" Dec 09 14:22:53 crc kubenswrapper[5116]: I1209 14:22:53.353865 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" event={"ID":"140ab739-f0e3-4429-8e23-03782755777d","Type":"ContainerStarted","Data":"63fa14cef65c6ac709b2413472d850235cc43d843f7e025a50cc9050b6ff3247"} Dec 09 14:24:21 crc kubenswrapper[5116]: I1209 14:24:21.980884 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:24:21 crc kubenswrapper[5116]: I1209 14:24:21.990026 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:24:22 crc kubenswrapper[5116]: I1209 14:24:22.039984 5116 scope.go:117] "RemoveContainer" containerID="e1c5ec2e3d21197bb164739dfcd71263dd2311f3a823a0cfc87f0068dbb2f719" Dec 09 14:24:30 crc kubenswrapper[5116]: I1209 14:24:30.418937 5116 ???:1] "http: TLS handshake error from 192.168.126.11:41666: no serving certificate available for the kubelet" Dec 09 14:24:52 crc kubenswrapper[5116]: I1209 14:24:52.167146 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:24:52 crc kubenswrapper[5116]: I1209 14:24:52.167928 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:25:22 crc kubenswrapper[5116]: I1209 14:25:22.166903 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:25:22 crc kubenswrapper[5116]: I1209 14:25:22.167574 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:25:52 crc kubenswrapper[5116]: I1209 14:25:52.167340 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:25:52 crc kubenswrapper[5116]: I1209 14:25:52.168022 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:25:52 crc kubenswrapper[5116]: I1209 14:25:52.168078 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:25:52 crc kubenswrapper[5116]: I1209 14:25:52.168596 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63fa14cef65c6ac709b2413472d850235cc43d843f7e025a50cc9050b6ff3247"} pod="openshift-machine-config-operator/machine-config-daemon-phdhk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 14:25:52 crc kubenswrapper[5116]: I1209 14:25:52.168650 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" containerID="cri-o://63fa14cef65c6ac709b2413472d850235cc43d843f7e025a50cc9050b6ff3247" gracePeriod=600 Dec 09 14:25:52 crc kubenswrapper[5116]: I1209 14:25:52.300143 5116 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 14:25:52 crc kubenswrapper[5116]: I1209 14:25:52.508276 5116 generic.go:358] "Generic (PLEG): container finished" podID="140ab739-f0e3-4429-8e23-03782755777d" containerID="63fa14cef65c6ac709b2413472d850235cc43d843f7e025a50cc9050b6ff3247" 
exitCode=0 Dec 09 14:25:52 crc kubenswrapper[5116]: I1209 14:25:52.508340 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" event={"ID":"140ab739-f0e3-4429-8e23-03782755777d","Type":"ContainerDied","Data":"63fa14cef65c6ac709b2413472d850235cc43d843f7e025a50cc9050b6ff3247"} Dec 09 14:25:52 crc kubenswrapper[5116]: I1209 14:25:52.508672 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" event={"ID":"140ab739-f0e3-4429-8e23-03782755777d","Type":"ContainerStarted","Data":"7315817fcea3499a7635fc1289860612fc7524332d75eb55b4b0ddd1ffdb8798"} Dec 09 14:25:52 crc kubenswrapper[5116]: I1209 14:25:52.508699 5116 scope.go:117] "RemoveContainer" containerID="17f6fa581e6c8bab2a2df410743179402ba025d782243947d0021133fa9c7873" Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.423900 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm"] Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.427995 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" podUID="59bb7bf4-a83e-4d96-87b6-b2e4235e1620" containerName="kube-rbac-proxy" containerID="cri-o://e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b" gracePeriod=30 Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.428184 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" podUID="59bb7bf4-a83e-4d96-87b6-b2e4235e1620" containerName="ovnkube-cluster-manager" containerID="cri-o://a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065" gracePeriod=30 Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.577196 5116 generic.go:358] "Generic (PLEG): container finished" podID="59bb7bf4-a83e-4d96-87b6-b2e4235e1620" containerID="a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065" exitCode=0 Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.577226 5116 generic.go:358] "Generic (PLEG): container finished" podID="59bb7bf4-a83e-4d96-87b6-b2e4235e1620" containerID="e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b" exitCode=0 Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.577268 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" event={"ID":"59bb7bf4-a83e-4d96-87b6-b2e4235e1620","Type":"ContainerDied","Data":"a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065"} Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.577292 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" event={"ID":"59bb7bf4-a83e-4d96-87b6-b2e4235e1620","Type":"ContainerDied","Data":"e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b"} Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.633516 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tg8rn"] Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.634004 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="ovn-controller" containerID="cri-o://5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397" 
gracePeriod=30 Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.634064 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="sbdb" containerID="cri-o://2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094" gracePeriod=30 Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.634122 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="nbdb" containerID="cri-o://208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4" gracePeriod=30 Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.634135 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="ovn-acl-logging" containerID="cri-o://73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76" gracePeriod=30 Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.634212 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="kube-rbac-proxy-node" containerID="cri-o://f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451" gracePeriod=30 Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.634082 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781" gracePeriod=30 Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.634291 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="northd" containerID="cri-o://4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a" gracePeriod=30 Dec 09 14:26:02 crc kubenswrapper[5116]: I1209 14:26:02.663672 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="ovnkube-controller" containerID="cri-o://a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51" gracePeriod=30 Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.063559 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.100797 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5"] Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.101528 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="59bb7bf4-a83e-4d96-87b6-b2e4235e1620" containerName="ovnkube-cluster-manager" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.101554 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bb7bf4-a83e-4d96-87b6-b2e4235e1620" containerName="ovnkube-cluster-manager" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.101571 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="59bb7bf4-a83e-4d96-87b6-b2e4235e1620" containerName="kube-rbac-proxy" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.101579 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bb7bf4-a83e-4d96-87b6-b2e4235e1620" containerName="kube-rbac-proxy" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.101692 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="59bb7bf4-a83e-4d96-87b6-b2e4235e1620" containerName="ovnkube-cluster-manager" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.101711 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="59bb7bf4-a83e-4d96-87b6-b2e4235e1620" containerName="kube-rbac-proxy" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.111809 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.125221 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-ovnkube-config\") pod \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.125274 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-ovn-control-plane-metrics-cert\") pod \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.125368 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-env-overrides\") pod \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.126005 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvmnt\" (UniqueName: \"kubernetes.io/projected/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-kube-api-access-cvmnt\") pod \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\" (UID: \"59bb7bf4-a83e-4d96-87b6-b2e4235e1620\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.127198 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "59bb7bf4-a83e-4d96-87b6-b2e4235e1620" (UID: "59bb7bf4-a83e-4d96-87b6-b2e4235e1620"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.127722 5116 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.129267 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "59bb7bf4-a83e-4d96-87b6-b2e4235e1620" (UID: "59bb7bf4-a83e-4d96-87b6-b2e4235e1620"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.132911 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-kube-api-access-cvmnt" (OuterVolumeSpecName: "kube-api-access-cvmnt") pod "59bb7bf4-a83e-4d96-87b6-b2e4235e1620" (UID: "59bb7bf4-a83e-4d96-87b6-b2e4235e1620"). InnerVolumeSpecName "kube-api-access-cvmnt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.133204 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "59bb7bf4-a83e-4d96-87b6-b2e4235e1620" (UID: "59bb7bf4-a83e-4d96-87b6-b2e4235e1620"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.228936 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ghxz\" (UniqueName: \"kubernetes.io/projected/3d133ab5-92cb-41a3-8751-18a5329a0bbf-kube-api-access-7ghxz\") pod \"ovnkube-control-plane-97c9b6c48-fvqc5\" (UID: \"3d133ab5-92cb-41a3-8751-18a5329a0bbf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.229010 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d133ab5-92cb-41a3-8751-18a5329a0bbf-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-fvqc5\" (UID: \"3d133ab5-92cb-41a3-8751-18a5329a0bbf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.229048 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d133ab5-92cb-41a3-8751-18a5329a0bbf-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-fvqc5\" (UID: \"3d133ab5-92cb-41a3-8751-18a5329a0bbf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.229115 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d133ab5-92cb-41a3-8751-18a5329a0bbf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-fvqc5\" (UID: \"3d133ab5-92cb-41a3-8751-18a5329a0bbf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" 
Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.229214 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.229229 5116 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.229242 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cvmnt\" (UniqueName: \"kubernetes.io/projected/59bb7bf4-a83e-4d96-87b6-b2e4235e1620-kube-api-access-cvmnt\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.308339 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tg8rn_0df855a1-8389-4874-a68c-de5f76fe650a/ovn-acl-logging/0.log" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.309268 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tg8rn_0df855a1-8389-4874-a68c-de5f76fe650a/ovn-controller/0.log" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.310083 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.330476 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d133ab5-92cb-41a3-8751-18a5329a0bbf-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-fvqc5\" (UID: \"3d133ab5-92cb-41a3-8751-18a5329a0bbf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.330823 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d133ab5-92cb-41a3-8751-18a5329a0bbf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-fvqc5\" (UID: \"3d133ab5-92cb-41a3-8751-18a5329a0bbf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.330991 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7ghxz\" (UniqueName: \"kubernetes.io/projected/3d133ab5-92cb-41a3-8751-18a5329a0bbf-kube-api-access-7ghxz\") pod \"ovnkube-control-plane-97c9b6c48-fvqc5\" (UID: \"3d133ab5-92cb-41a3-8751-18a5329a0bbf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.331139 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d133ab5-92cb-41a3-8751-18a5329a0bbf-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-fvqc5\" (UID: \"3d133ab5-92cb-41a3-8751-18a5329a0bbf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.332032 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d133ab5-92cb-41a3-8751-18a5329a0bbf-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-fvqc5\" (UID: 
\"3d133ab5-92cb-41a3-8751-18a5329a0bbf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.332057 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d133ab5-92cb-41a3-8751-18a5329a0bbf-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-fvqc5\" (UID: \"3d133ab5-92cb-41a3-8751-18a5329a0bbf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.335600 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d133ab5-92cb-41a3-8751-18a5329a0bbf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-fvqc5\" (UID: \"3d133ab5-92cb-41a3-8751-18a5329a0bbf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.354493 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ghxz\" (UniqueName: \"kubernetes.io/projected/3d133ab5-92cb-41a3-8751-18a5329a0bbf-kube-api-access-7ghxz\") pod \"ovnkube-control-plane-97c9b6c48-fvqc5\" (UID: \"3d133ab5-92cb-41a3-8751-18a5329a0bbf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.372728 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8rr8m"] Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373370 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373393 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373417 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="kube-rbac-proxy-node" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373425 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="kube-rbac-proxy-node" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373441 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="ovn-controller" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373449 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="ovn-controller" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373461 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="nbdb" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373468 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="nbdb" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373482 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="kubecfg-setup" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373490 5116 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="kubecfg-setup" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373500 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="northd" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373507 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="northd" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373516 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="sbdb" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373524 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="sbdb" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373534 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="ovnkube-controller" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373541 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="ovnkube-controller" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373552 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="ovn-acl-logging" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373560 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="ovn-acl-logging" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373667 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="northd" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373681 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373694 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="sbdb" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373702 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="kube-rbac-proxy-node" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373713 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="ovn-acl-logging" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373722 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="ovnkube-controller" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373733 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="nbdb" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.373744 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" containerName="ovn-controller" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.380438 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.432717 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-log-socket\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.432779 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-etc-openvswitch\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.432827 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-var-lib-openvswitch\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.432865 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-ovnkube-config\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.432896 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-systemd-units\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.432930 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-systemd\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.432854 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-log-socket" (OuterVolumeSpecName: "log-socket") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.432889 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.432938 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.432998 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt5f7\" (UniqueName: \"kubernetes.io/projected/0df855a1-8389-4874-a68c-de5f76fe650a-kube-api-access-xt5f7\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433014 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433054 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-run-netns\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433124 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-cni-netd\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433187 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-run-ovn-kubernetes\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433241 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-ovnkube-script-lib\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433269 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-openvswitch\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433303 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-cni-bin\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433314 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433344 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433360 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0df855a1-8389-4874-a68c-de5f76fe650a-ovn-node-metrics-cert\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433406 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-ovn\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433453 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-kubelet\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433503 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-env-overrides\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433510 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433527 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-node-log\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433556 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433562 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-slash\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433580 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433603 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433605 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0df855a1-8389-4874-a68c-de5f76fe650a\" (UID: \"0df855a1-8389-4874-a68c-de5f76fe650a\") " Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433671 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433727 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-node-log" (OuterVolumeSpecName: "node-log") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433834 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433871 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.433907 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-slash" (OuterVolumeSpecName: "host-slash") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434046 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434134 5116 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-node-log\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434152 5116 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-slash\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434166 5116 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434180 5116 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-log-socket\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434193 5116 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434205 5116 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434218 5116 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434231 5116 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434242 5116 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434254 5116 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434265 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434277 5116 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434289 5116 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434300 5116 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434311 5116 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.434442 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.436129 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.436911 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df855a1-8389-4874-a68c-de5f76fe650a-kube-api-access-xt5f7" (OuterVolumeSpecName: "kube-api-access-xt5f7") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "kube-api-access-xt5f7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.437212 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df855a1-8389-4874-a68c-de5f76fe650a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.444188 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0df855a1-8389-4874-a68c-de5f76fe650a" (UID: "0df855a1-8389-4874-a68c-de5f76fe650a"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.535835 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-var-lib-openvswitch\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.535894 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-etc-openvswitch\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.535921 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-run-netns\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536034 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a97e0a55-e5a3-4338-865b-98007dae1f6c-ovnkube-script-lib\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536119 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-run-systemd\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536223 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536250 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-run-ovn-kubernetes\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536271 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-run-openvswitch\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536296 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bfqr\" 
(UniqueName: \"kubernetes.io/projected/a97e0a55-e5a3-4338-865b-98007dae1f6c-kube-api-access-8bfqr\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536396 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-slash\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536444 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-cni-bin\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536529 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-run-ovn\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536578 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a97e0a55-e5a3-4338-865b-98007dae1f6c-ovn-node-metrics-cert\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536597 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a97e0a55-e5a3-4338-865b-98007dae1f6c-env-overrides\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536617 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-kubelet\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536663 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-node-log\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536697 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-cni-netd\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536744 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-systemd-units\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536794 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a97e0a55-e5a3-4338-865b-98007dae1f6c-ovnkube-config\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536828 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-log-socket\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536939 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.536989 5116 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0df855a1-8389-4874-a68c-de5f76fe650a-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.537003 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xt5f7\" (UniqueName: \"kubernetes.io/projected/0df855a1-8389-4874-a68c-de5f76fe650a-kube-api-access-xt5f7\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.537021 5116 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0df855a1-8389-4874-a68c-de5f76fe650a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.537032 5116 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0df855a1-8389-4874-a68c-de5f76fe650a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.587090 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-554lf_2a441b53-f957-4f01-a123-a96c637c3fe2/kube-multus/0.log" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.587136 5116 generic.go:358] "Generic (PLEG): container finished" podID="2a441b53-f957-4f01-a123-a96c637c3fe2" containerID="e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856" exitCode=2 Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.587170 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-554lf" event={"ID":"2a441b53-f957-4f01-a123-a96c637c3fe2","Type":"ContainerDied","Data":"e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.587820 5116 scope.go:117] "RemoveContainer" containerID="e073e1587d3400b68ddb85b989b9d18a2fb42a46dc6b3d13b0bac78746521856" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.589807 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" event={"ID":"3d133ab5-92cb-41a3-8751-18a5329a0bbf","Type":"ContainerStarted","Data":"648f0dfca4527f48ff8d465ce19bc54806d48d9b6eee1479c43e4d9c0789dcd5"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.597136 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" event={"ID":"59bb7bf4-a83e-4d96-87b6-b2e4235e1620","Type":"ContainerDied","Data":"0460bd7069b7358e184303b4d03cc73f2b74d738be65e783f5de66863b71bdcc"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.597175 5116 scope.go:117] "RemoveContainer" containerID="a8a0e306bbc6cb46b3b7e2527d4d4cc2429cc3ff41c8a4499dc4baea04e67065" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.597344 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.614975 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tg8rn_0df855a1-8389-4874-a68c-de5f76fe650a/ovn-acl-logging/0.log" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.615581 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tg8rn_0df855a1-8389-4874-a68c-de5f76fe650a/ovn-controller/0.log" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616191 5116 generic.go:358] "Generic (PLEG): container finished" podID="0df855a1-8389-4874-a68c-de5f76fe650a" containerID="a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51" exitCode=0 Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616216 5116 generic.go:358] "Generic (PLEG): container finished" podID="0df855a1-8389-4874-a68c-de5f76fe650a" containerID="2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094" exitCode=0 Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616225 5116 generic.go:358] "Generic (PLEG): container finished" podID="0df855a1-8389-4874-a68c-de5f76fe650a" containerID="208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4" exitCode=0 Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616241 5116 generic.go:358] "Generic (PLEG): container finished" podID="0df855a1-8389-4874-a68c-de5f76fe650a" containerID="4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a" exitCode=0 Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616250 5116 generic.go:358] "Generic (PLEG): container finished" podID="0df855a1-8389-4874-a68c-de5f76fe650a" containerID="9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781" exitCode=0 Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616257 5116 generic.go:358] "Generic (PLEG): container finished" podID="0df855a1-8389-4874-a68c-de5f76fe650a" containerID="f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451" exitCode=0 Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616264 5116 generic.go:358] "Generic (PLEG): container finished" podID="0df855a1-8389-4874-a68c-de5f76fe650a" containerID="73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76" exitCode=143 Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616272 5116 generic.go:358] "Generic (PLEG): container finished" podID="0df855a1-8389-4874-a68c-de5f76fe650a" containerID="5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397" exitCode=143 Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616264 5116 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerDied","Data":"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616337 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerDied","Data":"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616359 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerDied","Data":"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616371 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerDied","Data":"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616384 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerDied","Data":"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616394 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerDied","Data":"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616404 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616413 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616418 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616423 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616428 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616435 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerDied","Data":"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616442 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616337 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616449 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616567 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616582 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616588 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616593 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616597 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616602 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616607 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616623 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerDied","Data":"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616641 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616646 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616651 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616656 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616660 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616665 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616675 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616680 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616685 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616692 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tg8rn" event={"ID":"0df855a1-8389-4874-a68c-de5f76fe650a","Type":"ContainerDied","Data":"a1ddadc499fe145d5e1ed1493e56821ebac67c8049dbd9c5aa533d6d5abb8eb0"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616700 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616707 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616712 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616716 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616721 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616727 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616732 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616737 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.616742 5116 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3"} Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.623467 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm"] Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.628009 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w69sm"] Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.630560 5116 scope.go:117] "RemoveContainer" containerID="e633faf99dd323e4f9562a8f88222e3f04f20f01fc7841216ce1947aee44747b" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638248 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638280 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-run-ovn-kubernetes\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638298 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-run-openvswitch\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638313 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8bfqr\" (UniqueName: \"kubernetes.io/projected/a97e0a55-e5a3-4338-865b-98007dae1f6c-kube-api-access-8bfqr\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638335 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-slash\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638352 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-cni-bin\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638367 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-run-ovn\") pod \"ovnkube-node-8rr8m\" 
(UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638386 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a97e0a55-e5a3-4338-865b-98007dae1f6c-ovn-node-metrics-cert\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638404 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a97e0a55-e5a3-4338-865b-98007dae1f6c-env-overrides\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638419 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-kubelet\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638435 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-node-log\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638452 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-cni-netd\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638474 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-systemd-units\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638491 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a97e0a55-e5a3-4338-865b-98007dae1f6c-ovnkube-config\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638505 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-log-socket\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638526 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-var-lib-openvswitch\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 
14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638550 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-etc-openvswitch\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638569 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-run-netns\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638590 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a97e0a55-e5a3-4338-865b-98007dae1f6c-ovnkube-script-lib\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638608 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-run-systemd\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638689 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-run-systemd\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638720 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638741 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-run-ovn-kubernetes\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638761 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-run-openvswitch\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638802 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-run-netns\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.638835 5116 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-cni-netd\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.639346 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-slash\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.639379 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-cni-bin\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.639402 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-run-ovn\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.639430 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a97e0a55-e5a3-4338-865b-98007dae1f6c-ovnkube-script-lib\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.639469 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-host-kubelet\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.639767 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a97e0a55-e5a3-4338-865b-98007dae1f6c-env-overrides\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.639809 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-node-log\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.639834 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-systemd-units\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.640221 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a97e0a55-e5a3-4338-865b-98007dae1f6c-ovnkube-config\") pod \"ovnkube-node-8rr8m\" 
(UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.640260 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-log-socket\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.640281 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-var-lib-openvswitch\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.640301 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97e0a55-e5a3-4338-865b-98007dae1f6c-etc-openvswitch\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.651403 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a97e0a55-e5a3-4338-865b-98007dae1f6c-ovn-node-metrics-cert\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.656487 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bfqr\" (UniqueName: \"kubernetes.io/projected/a97e0a55-e5a3-4338-865b-98007dae1f6c-kube-api-access-8bfqr\") pod \"ovnkube-node-8rr8m\" (UID: \"a97e0a55-e5a3-4338-865b-98007dae1f6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.679924 5116 scope.go:117] "RemoveContainer" containerID="a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.702705 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.705862 5116 scope.go:117] "RemoveContainer" containerID="2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.716486 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tg8rn"] Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.719983 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tg8rn"] Dec 09 14:26:03 crc kubenswrapper[5116]: W1209 14:26:03.736090 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda97e0a55_e5a3_4338_865b_98007dae1f6c.slice/crio-96eee02c5da259928d947211a44f2239bec061717424cae06bc1228a04ffdd13 WatchSource:0}: Error finding container 96eee02c5da259928d947211a44f2239bec061717424cae06bc1228a04ffdd13: Status 404 returned error can't find the container with id 96eee02c5da259928d947211a44f2239bec061717424cae06bc1228a04ffdd13 Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.757997 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0df855a1-8389-4874-a68c-de5f76fe650a" path="/var/lib/kubelet/pods/0df855a1-8389-4874-a68c-de5f76fe650a/volumes" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.759195 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59bb7bf4-a83e-4d96-87b6-b2e4235e1620" path="/var/lib/kubelet/pods/59bb7bf4-a83e-4d96-87b6-b2e4235e1620/volumes" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.770863 5116 scope.go:117] "RemoveContainer" containerID="208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.816943 5116 scope.go:117] "RemoveContainer" containerID="4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.837005 5116 scope.go:117] "RemoveContainer" containerID="9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.859292 5116 scope.go:117] "RemoveContainer" containerID="f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.879221 5116 scope.go:117] "RemoveContainer" containerID="73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.944236 5116 scope.go:117] "RemoveContainer" containerID="5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.957632 5116 scope.go:117] "RemoveContainer" containerID="5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.970670 5116 scope.go:117] "RemoveContainer" containerID="a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51" Dec 09 14:26:03 crc kubenswrapper[5116]: E1209 14:26:03.971217 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\": container with ID starting with a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51 not found: ID does not exist" containerID="a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51" Dec 09 14:26:03 crc 
kubenswrapper[5116]: I1209 14:26:03.971254 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51"} err="failed to get container status \"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\": rpc error: code = NotFound desc = could not find container \"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\": container with ID starting with a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.971280 5116 scope.go:117] "RemoveContainer" containerID="2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094" Dec 09 14:26:03 crc kubenswrapper[5116]: E1209 14:26:03.971569 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\": container with ID starting with 2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094 not found: ID does not exist" containerID="2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.971596 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094"} err="failed to get container status \"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\": rpc error: code = NotFound desc = could not find container \"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\": container with ID starting with 2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.971613 5116 scope.go:117] "RemoveContainer" containerID="208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4" Dec 09 14:26:03 crc kubenswrapper[5116]: E1209 14:26:03.971859 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\": container with ID starting with 208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4 not found: ID does not exist" containerID="208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.971886 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4"} err="failed to get container status \"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\": rpc error: code = NotFound desc = could not find container \"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\": container with ID starting with 208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.971904 5116 scope.go:117] "RemoveContainer" containerID="4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a" Dec 09 14:26:03 crc kubenswrapper[5116]: E1209 14:26:03.972143 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\": container with ID starting with 
4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a not found: ID does not exist" containerID="4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.972163 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a"} err="failed to get container status \"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\": rpc error: code = NotFound desc = could not find container \"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\": container with ID starting with 4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.972175 5116 scope.go:117] "RemoveContainer" containerID="9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781" Dec 09 14:26:03 crc kubenswrapper[5116]: E1209 14:26:03.972348 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781\": container with ID starting with 9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781 not found: ID does not exist" containerID="9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.972364 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781"} err="failed to get container status \"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781\": rpc error: code = NotFound desc = could not find container \"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781\": container with ID starting with 9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.972375 5116 scope.go:117] "RemoveContainer" containerID="f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451" Dec 09 14:26:03 crc kubenswrapper[5116]: E1209 14:26:03.972533 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451\": container with ID starting with f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451 not found: ID does not exist" containerID="f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.972548 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451"} err="failed to get container status \"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451\": rpc error: code = NotFound desc = could not find container \"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451\": container with ID starting with f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.972558 5116 scope.go:117] "RemoveContainer" containerID="73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76" Dec 09 14:26:03 crc kubenswrapper[5116]: E1209 14:26:03.972735 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76\": container with ID starting with 73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76 not found: ID does not exist" containerID="73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.972760 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76"} err="failed to get container status \"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76\": rpc error: code = NotFound desc = could not find container \"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76\": container with ID starting with 73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.972775 5116 scope.go:117] "RemoveContainer" containerID="5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397" Dec 09 14:26:03 crc kubenswrapper[5116]: E1209 14:26:03.973038 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397\": container with ID starting with 5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397 not found: ID does not exist" containerID="5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.973061 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397"} err="failed to get container status \"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397\": rpc error: code = NotFound desc = could not find container \"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397\": container with ID starting with 5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.973075 5116 scope.go:117] "RemoveContainer" containerID="5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3" Dec 09 14:26:03 crc kubenswrapper[5116]: E1209 14:26:03.973320 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\": container with ID starting with 5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3 not found: ID does not exist" containerID="5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.973343 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3"} err="failed to get container status \"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\": rpc error: code = NotFound desc = could not find container \"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\": container with ID starting with 5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.973357 5116 scope.go:117] "RemoveContainer" 
containerID="a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.973549 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51"} err="failed to get container status \"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\": rpc error: code = NotFound desc = could not find container \"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\": container with ID starting with a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.973575 5116 scope.go:117] "RemoveContainer" containerID="2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.973739 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094"} err="failed to get container status \"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\": rpc error: code = NotFound desc = could not find container \"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\": container with ID starting with 2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.973756 5116 scope.go:117] "RemoveContainer" containerID="208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.974127 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4"} err="failed to get container status \"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\": rpc error: code = NotFound desc = could not find container \"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\": container with ID starting with 208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.974148 5116 scope.go:117] "RemoveContainer" containerID="4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.974402 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a"} err="failed to get container status \"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\": rpc error: code = NotFound desc = could not find container \"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\": container with ID starting with 4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.974424 5116 scope.go:117] "RemoveContainer" containerID="9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.974621 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781"} err="failed to get container status \"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781\": rpc error: code = NotFound desc = could not find 
container \"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781\": container with ID starting with 9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.974638 5116 scope.go:117] "RemoveContainer" containerID="f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.974804 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451"} err="failed to get container status \"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451\": rpc error: code = NotFound desc = could not find container \"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451\": container with ID starting with f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.974818 5116 scope.go:117] "RemoveContainer" containerID="73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.975137 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76"} err="failed to get container status \"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76\": rpc error: code = NotFound desc = could not find container \"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76\": container with ID starting with 73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.975152 5116 scope.go:117] "RemoveContainer" containerID="5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.975319 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397"} err="failed to get container status \"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397\": rpc error: code = NotFound desc = could not find container \"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397\": container with ID starting with 5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.975332 5116 scope.go:117] "RemoveContainer" containerID="5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.975528 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3"} err="failed to get container status \"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\": rpc error: code = NotFound desc = could not find container \"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\": container with ID starting with 5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.975549 5116 scope.go:117] "RemoveContainer" containerID="a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.975775 5116 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51"} err="failed to get container status \"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\": rpc error: code = NotFound desc = could not find container \"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\": container with ID starting with a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.975794 5116 scope.go:117] "RemoveContainer" containerID="2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.976035 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094"} err="failed to get container status \"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\": rpc error: code = NotFound desc = could not find container \"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\": container with ID starting with 2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.976059 5116 scope.go:117] "RemoveContainer" containerID="208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.976472 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4"} err="failed to get container status \"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\": rpc error: code = NotFound desc = could not find container \"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\": container with ID starting with 208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.976497 5116 scope.go:117] "RemoveContainer" containerID="4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.976768 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a"} err="failed to get container status \"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\": rpc error: code = NotFound desc = could not find container \"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\": container with ID starting with 4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.976783 5116 scope.go:117] "RemoveContainer" containerID="9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.976994 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781"} err="failed to get container status \"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781\": rpc error: code = NotFound desc = could not find container \"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781\": container with ID starting with 
9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.977018 5116 scope.go:117] "RemoveContainer" containerID="f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.977343 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451"} err="failed to get container status \"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451\": rpc error: code = NotFound desc = could not find container \"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451\": container with ID starting with f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.977368 5116 scope.go:117] "RemoveContainer" containerID="73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.977599 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76"} err="failed to get container status \"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76\": rpc error: code = NotFound desc = could not find container \"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76\": container with ID starting with 73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.977613 5116 scope.go:117] "RemoveContainer" containerID="5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.977943 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397"} err="failed to get container status \"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397\": rpc error: code = NotFound desc = could not find container \"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397\": container with ID starting with 5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.977978 5116 scope.go:117] "RemoveContainer" containerID="5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.978281 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3"} err="failed to get container status \"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\": rpc error: code = NotFound desc = could not find container \"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\": container with ID starting with 5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.978305 5116 scope.go:117] "RemoveContainer" containerID="a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.978523 5116 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51"} err="failed to get container status \"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\": rpc error: code = NotFound desc = could not find container \"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\": container with ID starting with a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.978545 5116 scope.go:117] "RemoveContainer" containerID="2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.978899 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094"} err="failed to get container status \"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\": rpc error: code = NotFound desc = could not find container \"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\": container with ID starting with 2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.978941 5116 scope.go:117] "RemoveContainer" containerID="208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.979226 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4"} err="failed to get container status \"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\": rpc error: code = NotFound desc = could not find container \"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\": container with ID starting with 208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.979243 5116 scope.go:117] "RemoveContainer" containerID="4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.979470 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a"} err="failed to get container status \"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\": rpc error: code = NotFound desc = could not find container \"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\": container with ID starting with 4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.979486 5116 scope.go:117] "RemoveContainer" containerID="9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.980093 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781"} err="failed to get container status \"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781\": rpc error: code = NotFound desc = could not find container \"9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781\": container with ID starting with 9b36f92b140f1b8f985ace47148441933336e55f252ea6814ea2eba030901781 not found: ID does not exist" Dec 
09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.980125 5116 scope.go:117] "RemoveContainer" containerID="f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.980419 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451"} err="failed to get container status \"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451\": rpc error: code = NotFound desc = could not find container \"f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451\": container with ID starting with f8cd737f23d18e9e09726103f8292e985ead63a3ac778db521135a299a1b0451 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.980443 5116 scope.go:117] "RemoveContainer" containerID="73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.980692 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76"} err="failed to get container status \"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76\": rpc error: code = NotFound desc = could not find container \"73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76\": container with ID starting with 73edb20853038c76bc5377a72d3cbdf646dad4ab8910dde636f5fa7cd33bec76 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.980713 5116 scope.go:117] "RemoveContainer" containerID="5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.980930 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397"} err="failed to get container status \"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397\": rpc error: code = NotFound desc = could not find container \"5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397\": container with ID starting with 5787c6aaf7c56dc1e1f7b93cca2085b9726329ed5156b0b8bd4512d3877df397 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.980969 5116 scope.go:117] "RemoveContainer" containerID="5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.981238 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3"} err="failed to get container status \"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\": rpc error: code = NotFound desc = could not find container \"5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3\": container with ID starting with 5c63e366b2d151b8129d1c2ad99d0066106eaf946d7e57dfdd76913424df4ae3 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.981259 5116 scope.go:117] "RemoveContainer" containerID="a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.981519 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51"} err="failed to get container status 
\"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\": rpc error: code = NotFound desc = could not find container \"a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51\": container with ID starting with a08d7e532cdf304a4e8fb8c6c74067f44e365393fd2abddd594f3fe965d46b51 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.981540 5116 scope.go:117] "RemoveContainer" containerID="2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.982478 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094"} err="failed to get container status \"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\": rpc error: code = NotFound desc = could not find container \"2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094\": container with ID starting with 2b42371139cdf0c0342685328ba9223d7118daae988bfad249e33c1656192094 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.982503 5116 scope.go:117] "RemoveContainer" containerID="208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.982822 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4"} err="failed to get container status \"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\": rpc error: code = NotFound desc = could not find container \"208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4\": container with ID starting with 208fa8bc04b899b3935302be57e1b5310c58dbe8805b7dc53ebdb1df404fc4a4 not found: ID does not exist" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.982843 5116 scope.go:117] "RemoveContainer" containerID="4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a" Dec 09 14:26:03 crc kubenswrapper[5116]: I1209 14:26:03.983155 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a"} err="failed to get container status \"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\": rpc error: code = NotFound desc = could not find container \"4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a\": container with ID starting with 4ac1a5f4d870f164fa8f751e783705fa58dc1983ac1f85847b2ccce0f6e82f3a not found: ID does not exist" Dec 09 14:26:04 crc kubenswrapper[5116]: I1209 14:26:04.629469 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-554lf_2a441b53-f957-4f01-a123-a96c637c3fe2/kube-multus/0.log" Dec 09 14:26:04 crc kubenswrapper[5116]: I1209 14:26:04.629873 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-554lf" event={"ID":"2a441b53-f957-4f01-a123-a96c637c3fe2","Type":"ContainerStarted","Data":"61e7e9cb267f080866b84881654054a767a13844108f3e6b50cd7e6d694dd91a"} Dec 09 14:26:04 crc kubenswrapper[5116]: I1209 14:26:04.632227 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" event={"ID":"3d133ab5-92cb-41a3-8751-18a5329a0bbf","Type":"ContainerStarted","Data":"d107044d5d9b5adbcf145b833bddf73e918f3801cff2bf30b8218749d5922afd"} Dec 09 14:26:04 crc kubenswrapper[5116]: I1209 
14:26:04.632255 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" event={"ID":"3d133ab5-92cb-41a3-8751-18a5329a0bbf","Type":"ContainerStarted","Data":"0378909213ec9d4caf4dd1f673ed2bb140ae4389006b64f245a31412c807701f"} Dec 09 14:26:04 crc kubenswrapper[5116]: I1209 14:26:04.638267 5116 generic.go:358] "Generic (PLEG): container finished" podID="a97e0a55-e5a3-4338-865b-98007dae1f6c" containerID="5a4a6c67e65bef9a3935fd000ef9d830b3d0858a3b58562107ee6dd84ad42551" exitCode=0 Dec 09 14:26:04 crc kubenswrapper[5116]: I1209 14:26:04.638368 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" event={"ID":"a97e0a55-e5a3-4338-865b-98007dae1f6c","Type":"ContainerDied","Data":"5a4a6c67e65bef9a3935fd000ef9d830b3d0858a3b58562107ee6dd84ad42551"} Dec 09 14:26:04 crc kubenswrapper[5116]: I1209 14:26:04.638393 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" event={"ID":"a97e0a55-e5a3-4338-865b-98007dae1f6c","Type":"ContainerStarted","Data":"96eee02c5da259928d947211a44f2239bec061717424cae06bc1228a04ffdd13"} Dec 09 14:26:04 crc kubenswrapper[5116]: I1209 14:26:04.668787 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-fvqc5" podStartSLOduration=2.668759231 podStartE2EDuration="2.668759231s" podCreationTimestamp="2025-12-09 14:26:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:26:04.660180779 +0000 UTC m=+703.181925627" watchObservedRunningTime="2025-12-09 14:26:04.668759231 +0000 UTC m=+703.190504069" Dec 09 14:26:05 crc kubenswrapper[5116]: I1209 14:26:05.650339 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" event={"ID":"a97e0a55-e5a3-4338-865b-98007dae1f6c","Type":"ContainerStarted","Data":"98ca5cdb64faef75c209f541ff8ac882f306ad517c879b0c93513031fbbacfd5"} Dec 09 14:26:05 crc kubenswrapper[5116]: I1209 14:26:05.650387 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" event={"ID":"a97e0a55-e5a3-4338-865b-98007dae1f6c","Type":"ContainerStarted","Data":"a2ecb565f00e6a9e4742162af31319739cbb705ad54eeca4d5502aae81cf4368"} Dec 09 14:26:05 crc kubenswrapper[5116]: I1209 14:26:05.650403 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" event={"ID":"a97e0a55-e5a3-4338-865b-98007dae1f6c","Type":"ContainerStarted","Data":"1ba4feeaa902b23f0093f8edb22a646a98ac323547ef4f3ee0da8a313c08d51f"} Dec 09 14:26:05 crc kubenswrapper[5116]: I1209 14:26:05.650414 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" event={"ID":"a97e0a55-e5a3-4338-865b-98007dae1f6c","Type":"ContainerStarted","Data":"9965e28171dce738855d86310ff49ffb41ede50fd027d815b006ebde0f814b67"} Dec 09 14:26:05 crc kubenswrapper[5116]: I1209 14:26:05.650425 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" event={"ID":"a97e0a55-e5a3-4338-865b-98007dae1f6c","Type":"ContainerStarted","Data":"93504d296be1a8d7cd6e134764e51c7b6523631e4d73a8e4f2367319f59fb225"} Dec 09 14:26:06 crc kubenswrapper[5116]: I1209 14:26:06.661899 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" 
event={"ID":"a97e0a55-e5a3-4338-865b-98007dae1f6c","Type":"ContainerStarted","Data":"bc82dae9aac28b047bf04b8fe158518d2e9140214af535060739173b8707e61e"} Dec 09 14:26:08 crc kubenswrapper[5116]: I1209 14:26:08.685438 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" event={"ID":"a97e0a55-e5a3-4338-865b-98007dae1f6c","Type":"ContainerStarted","Data":"5d2dca011fa9c94e198017d79118380b66b79f16dd0407d651d8ae582508417e"} Dec 09 14:26:11 crc kubenswrapper[5116]: I1209 14:26:11.708559 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" event={"ID":"a97e0a55-e5a3-4338-865b-98007dae1f6c","Type":"ContainerStarted","Data":"24347d300c941c797f2d7ec52eb3bebecb11df2f2a71896f165b10c149cdcebb"} Dec 09 14:26:11 crc kubenswrapper[5116]: I1209 14:26:11.710758 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:11 crc kubenswrapper[5116]: I1209 14:26:11.712087 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:11 crc kubenswrapper[5116]: I1209 14:26:11.712106 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:11 crc kubenswrapper[5116]: I1209 14:26:11.775011 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:11 crc kubenswrapper[5116]: I1209 14:26:11.775094 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:11 crc kubenswrapper[5116]: I1209 14:26:11.811513 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" podStartSLOduration=8.811495808 podStartE2EDuration="8.811495808s" podCreationTimestamp="2025-12-09 14:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:26:11.751364818 +0000 UTC m=+710.273109656" watchObservedRunningTime="2025-12-09 14:26:11.811495808 +0000 UTC m=+710.333240616" Dec 09 14:26:34 crc kubenswrapper[5116]: I1209 14:26:34.809086 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kq6lh"] Dec 09 14:26:34 crc kubenswrapper[5116]: I1209 14:26:34.827559 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:34 crc kubenswrapper[5116]: I1209 14:26:34.838520 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kq6lh"] Dec 09 14:26:35 crc kubenswrapper[5116]: I1209 14:26:35.000643 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-utilities\") pod \"redhat-marketplace-kq6lh\" (UID: \"969f6ee2-a2b0-4764-b22c-08d2478ca6e2\") " pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:35 crc kubenswrapper[5116]: I1209 14:26:35.000702 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-catalog-content\") pod \"redhat-marketplace-kq6lh\" (UID: \"969f6ee2-a2b0-4764-b22c-08d2478ca6e2\") " pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:35 crc kubenswrapper[5116]: I1209 14:26:35.000736 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4flxn\" (UniqueName: \"kubernetes.io/projected/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-kube-api-access-4flxn\") pod \"redhat-marketplace-kq6lh\" (UID: \"969f6ee2-a2b0-4764-b22c-08d2478ca6e2\") " pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:35 crc kubenswrapper[5116]: I1209 14:26:35.102176 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-utilities\") pod \"redhat-marketplace-kq6lh\" (UID: \"969f6ee2-a2b0-4764-b22c-08d2478ca6e2\") " pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:35 crc kubenswrapper[5116]: I1209 14:26:35.102227 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-catalog-content\") pod \"redhat-marketplace-kq6lh\" (UID: \"969f6ee2-a2b0-4764-b22c-08d2478ca6e2\") " pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:35 crc kubenswrapper[5116]: I1209 14:26:35.102264 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4flxn\" (UniqueName: \"kubernetes.io/projected/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-kube-api-access-4flxn\") pod \"redhat-marketplace-kq6lh\" (UID: \"969f6ee2-a2b0-4764-b22c-08d2478ca6e2\") " pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:35 crc kubenswrapper[5116]: I1209 14:26:35.102950 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-utilities\") pod \"redhat-marketplace-kq6lh\" (UID: \"969f6ee2-a2b0-4764-b22c-08d2478ca6e2\") " pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:35 crc kubenswrapper[5116]: I1209 14:26:35.103006 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-catalog-content\") pod \"redhat-marketplace-kq6lh\" (UID: \"969f6ee2-a2b0-4764-b22c-08d2478ca6e2\") " pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:35 crc kubenswrapper[5116]: I1209 14:26:35.121202 5116 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4flxn\" (UniqueName: \"kubernetes.io/projected/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-kube-api-access-4flxn\") pod \"redhat-marketplace-kq6lh\" (UID: \"969f6ee2-a2b0-4764-b22c-08d2478ca6e2\") " pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:35 crc kubenswrapper[5116]: I1209 14:26:35.145987 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:35 crc kubenswrapper[5116]: I1209 14:26:35.321457 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kq6lh"] Dec 09 14:26:35 crc kubenswrapper[5116]: I1209 14:26:35.875106 5116 generic.go:358] "Generic (PLEG): container finished" podID="969f6ee2-a2b0-4764-b22c-08d2478ca6e2" containerID="b2d69efb863bb3465243b8c0de2fb5b014178f0e7269a81f2c6c27726e88df1c" exitCode=0 Dec 09 14:26:35 crc kubenswrapper[5116]: I1209 14:26:35.875246 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq6lh" event={"ID":"969f6ee2-a2b0-4764-b22c-08d2478ca6e2","Type":"ContainerDied","Data":"b2d69efb863bb3465243b8c0de2fb5b014178f0e7269a81f2c6c27726e88df1c"} Dec 09 14:26:35 crc kubenswrapper[5116]: I1209 14:26:35.875324 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq6lh" event={"ID":"969f6ee2-a2b0-4764-b22c-08d2478ca6e2","Type":"ContainerStarted","Data":"dec7259489e24bf306a0c6a7d52355560083e14ea5f61f7392a90a48b7321296"} Dec 09 14:26:36 crc kubenswrapper[5116]: I1209 14:26:36.888090 5116 generic.go:358] "Generic (PLEG): container finished" podID="969f6ee2-a2b0-4764-b22c-08d2478ca6e2" containerID="c8c73bd5a233a551e3f1fd92e6f9bcc1c739fd00c30b0f0c37f510452ffd051a" exitCode=0 Dec 09 14:26:36 crc kubenswrapper[5116]: I1209 14:26:36.888559 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq6lh" event={"ID":"969f6ee2-a2b0-4764-b22c-08d2478ca6e2","Type":"ContainerDied","Data":"c8c73bd5a233a551e3f1fd92e6f9bcc1c739fd00c30b0f0c37f510452ffd051a"} Dec 09 14:26:37 crc kubenswrapper[5116]: I1209 14:26:37.896589 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq6lh" event={"ID":"969f6ee2-a2b0-4764-b22c-08d2478ca6e2","Type":"ContainerStarted","Data":"decd5b31beaddc87dcd4765d64b87a712d6649da14c0a27b23aefde1dee6cf15"} Dec 09 14:26:37 crc kubenswrapper[5116]: I1209 14:26:37.918735 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kq6lh" podStartSLOduration=3.297364346 podStartE2EDuration="3.918717879s" podCreationTimestamp="2025-12-09 14:26:34 +0000 UTC" firstStartedPulling="2025-12-09 14:26:35.876883077 +0000 UTC m=+734.398627875" lastFinishedPulling="2025-12-09 14:26:36.49823661 +0000 UTC m=+735.019981408" observedRunningTime="2025-12-09 14:26:37.914732775 +0000 UTC m=+736.436477573" watchObservedRunningTime="2025-12-09 14:26:37.918717879 +0000 UTC m=+736.440462687" Dec 09 14:26:43 crc kubenswrapper[5116]: I1209 14:26:43.760584 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8rr8m" Dec 09 14:26:45 crc kubenswrapper[5116]: I1209 14:26:45.147017 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:45 crc kubenswrapper[5116]: I1209 14:26:45.147071 5116 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:45 crc kubenswrapper[5116]: I1209 14:26:45.201639 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:46 crc kubenswrapper[5116]: I1209 14:26:46.020065 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:46 crc kubenswrapper[5116]: I1209 14:26:46.077148 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kq6lh"] Dec 09 14:26:47 crc kubenswrapper[5116]: I1209 14:26:47.955853 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kq6lh" podUID="969f6ee2-a2b0-4764-b22c-08d2478ca6e2" containerName="registry-server" containerID="cri-o://decd5b31beaddc87dcd4765d64b87a712d6649da14c0a27b23aefde1dee6cf15" gracePeriod=2 Dec 09 14:26:48 crc kubenswrapper[5116]: I1209 14:26:48.405284 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:48 crc kubenswrapper[5116]: I1209 14:26:48.484164 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4flxn\" (UniqueName: \"kubernetes.io/projected/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-kube-api-access-4flxn\") pod \"969f6ee2-a2b0-4764-b22c-08d2478ca6e2\" (UID: \"969f6ee2-a2b0-4764-b22c-08d2478ca6e2\") " Dec 09 14:26:48 crc kubenswrapper[5116]: I1209 14:26:48.484249 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-utilities\") pod \"969f6ee2-a2b0-4764-b22c-08d2478ca6e2\" (UID: \"969f6ee2-a2b0-4764-b22c-08d2478ca6e2\") " Dec 09 14:26:48 crc kubenswrapper[5116]: I1209 14:26:48.484343 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-catalog-content\") pod \"969f6ee2-a2b0-4764-b22c-08d2478ca6e2\" (UID: \"969f6ee2-a2b0-4764-b22c-08d2478ca6e2\") " Dec 09 14:26:48 crc kubenswrapper[5116]: I1209 14:26:48.485486 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-utilities" (OuterVolumeSpecName: "utilities") pod "969f6ee2-a2b0-4764-b22c-08d2478ca6e2" (UID: "969f6ee2-a2b0-4764-b22c-08d2478ca6e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:26:48 crc kubenswrapper[5116]: I1209 14:26:48.489530 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-kube-api-access-4flxn" (OuterVolumeSpecName: "kube-api-access-4flxn") pod "969f6ee2-a2b0-4764-b22c-08d2478ca6e2" (UID: "969f6ee2-a2b0-4764-b22c-08d2478ca6e2"). InnerVolumeSpecName "kube-api-access-4flxn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:26:48 crc kubenswrapper[5116]: I1209 14:26:48.496127 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "969f6ee2-a2b0-4764-b22c-08d2478ca6e2" (UID: "969f6ee2-a2b0-4764-b22c-08d2478ca6e2"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:26:48 crc kubenswrapper[5116]: I1209 14:26:48.586008 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:48 crc kubenswrapper[5116]: I1209 14:26:48.586398 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4flxn\" (UniqueName: \"kubernetes.io/projected/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-kube-api-access-4flxn\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:48 crc kubenswrapper[5116]: I1209 14:26:48.586767 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/969f6ee2-a2b0-4764-b22c-08d2478ca6e2-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.194893 5116 generic.go:358] "Generic (PLEG): container finished" podID="969f6ee2-a2b0-4764-b22c-08d2478ca6e2" containerID="decd5b31beaddc87dcd4765d64b87a712d6649da14c0a27b23aefde1dee6cf15" exitCode=0 Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.195077 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq6lh" event={"ID":"969f6ee2-a2b0-4764-b22c-08d2478ca6e2","Type":"ContainerDied","Data":"decd5b31beaddc87dcd4765d64b87a712d6649da14c0a27b23aefde1dee6cf15"} Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.195107 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq6lh" event={"ID":"969f6ee2-a2b0-4764-b22c-08d2478ca6e2","Type":"ContainerDied","Data":"dec7259489e24bf306a0c6a7d52355560083e14ea5f61f7392a90a48b7321296"} Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.195128 5116 scope.go:117] "RemoveContainer" containerID="decd5b31beaddc87dcd4765d64b87a712d6649da14c0a27b23aefde1dee6cf15" Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.195262 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kq6lh" Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.234213 5116 scope.go:117] "RemoveContainer" containerID="c8c73bd5a233a551e3f1fd92e6f9bcc1c739fd00c30b0f0c37f510452ffd051a" Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.236268 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kq6lh"] Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.245032 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kq6lh"] Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.255825 5116 scope.go:117] "RemoveContainer" containerID="b2d69efb863bb3465243b8c0de2fb5b014178f0e7269a81f2c6c27726e88df1c" Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.289540 5116 scope.go:117] "RemoveContainer" containerID="decd5b31beaddc87dcd4765d64b87a712d6649da14c0a27b23aefde1dee6cf15" Dec 09 14:26:49 crc kubenswrapper[5116]: E1209 14:26:49.290027 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"decd5b31beaddc87dcd4765d64b87a712d6649da14c0a27b23aefde1dee6cf15\": container with ID starting with decd5b31beaddc87dcd4765d64b87a712d6649da14c0a27b23aefde1dee6cf15 not found: ID does not exist" containerID="decd5b31beaddc87dcd4765d64b87a712d6649da14c0a27b23aefde1dee6cf15" Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.290121 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"decd5b31beaddc87dcd4765d64b87a712d6649da14c0a27b23aefde1dee6cf15"} err="failed to get container status \"decd5b31beaddc87dcd4765d64b87a712d6649da14c0a27b23aefde1dee6cf15\": rpc error: code = NotFound desc = could not find container \"decd5b31beaddc87dcd4765d64b87a712d6649da14c0a27b23aefde1dee6cf15\": container with ID starting with decd5b31beaddc87dcd4765d64b87a712d6649da14c0a27b23aefde1dee6cf15 not found: ID does not exist" Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.290223 5116 scope.go:117] "RemoveContainer" containerID="c8c73bd5a233a551e3f1fd92e6f9bcc1c739fd00c30b0f0c37f510452ffd051a" Dec 09 14:26:49 crc kubenswrapper[5116]: E1209 14:26:49.290698 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8c73bd5a233a551e3f1fd92e6f9bcc1c739fd00c30b0f0c37f510452ffd051a\": container with ID starting with c8c73bd5a233a551e3f1fd92e6f9bcc1c739fd00c30b0f0c37f510452ffd051a not found: ID does not exist" containerID="c8c73bd5a233a551e3f1fd92e6f9bcc1c739fd00c30b0f0c37f510452ffd051a" Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.290724 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8c73bd5a233a551e3f1fd92e6f9bcc1c739fd00c30b0f0c37f510452ffd051a"} err="failed to get container status \"c8c73bd5a233a551e3f1fd92e6f9bcc1c739fd00c30b0f0c37f510452ffd051a\": rpc error: code = NotFound desc = could not find container \"c8c73bd5a233a551e3f1fd92e6f9bcc1c739fd00c30b0f0c37f510452ffd051a\": container with ID starting with c8c73bd5a233a551e3f1fd92e6f9bcc1c739fd00c30b0f0c37f510452ffd051a not found: ID does not exist" Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.290742 5116 scope.go:117] "RemoveContainer" containerID="b2d69efb863bb3465243b8c0de2fb5b014178f0e7269a81f2c6c27726e88df1c" Dec 09 14:26:49 crc kubenswrapper[5116]: E1209 14:26:49.291173 5116 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b2d69efb863bb3465243b8c0de2fb5b014178f0e7269a81f2c6c27726e88df1c\": container with ID starting with b2d69efb863bb3465243b8c0de2fb5b014178f0e7269a81f2c6c27726e88df1c not found: ID does not exist" containerID="b2d69efb863bb3465243b8c0de2fb5b014178f0e7269a81f2c6c27726e88df1c" Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.291259 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2d69efb863bb3465243b8c0de2fb5b014178f0e7269a81f2c6c27726e88df1c"} err="failed to get container status \"b2d69efb863bb3465243b8c0de2fb5b014178f0e7269a81f2c6c27726e88df1c\": rpc error: code = NotFound desc = could not find container \"b2d69efb863bb3465243b8c0de2fb5b014178f0e7269a81f2c6c27726e88df1c\": container with ID starting with b2d69efb863bb3465243b8c0de2fb5b014178f0e7269a81f2c6c27726e88df1c not found: ID does not exist" Dec 09 14:26:49 crc kubenswrapper[5116]: I1209 14:26:49.760466 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="969f6ee2-a2b0-4764-b22c-08d2478ca6e2" path="/var/lib/kubelet/pods/969f6ee2-a2b0-4764-b22c-08d2478ca6e2/volumes" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.678250 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2w6wx"] Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.680786 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="969f6ee2-a2b0-4764-b22c-08d2478ca6e2" containerName="extract-content" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.680826 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="969f6ee2-a2b0-4764-b22c-08d2478ca6e2" containerName="extract-content" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.680856 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="969f6ee2-a2b0-4764-b22c-08d2478ca6e2" containerName="registry-server" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.680873 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="969f6ee2-a2b0-4764-b22c-08d2478ca6e2" containerName="registry-server" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.681029 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="969f6ee2-a2b0-4764-b22c-08d2478ca6e2" containerName="extract-utilities" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.681049 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="969f6ee2-a2b0-4764-b22c-08d2478ca6e2" containerName="extract-utilities" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.681274 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="969f6ee2-a2b0-4764-b22c-08d2478ca6e2" containerName="registry-server" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.697465 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2w6wx"] Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.697707 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.762912 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-catalog-content\") pod \"redhat-operators-2w6wx\" (UID: \"6d2e4af7-1a94-4fb2-9db4-466a2be665fb\") " pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.762987 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-utilities\") pod \"redhat-operators-2w6wx\" (UID: \"6d2e4af7-1a94-4fb2-9db4-466a2be665fb\") " pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.763030 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krv89\" (UniqueName: \"kubernetes.io/projected/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-kube-api-access-krv89\") pod \"redhat-operators-2w6wx\" (UID: \"6d2e4af7-1a94-4fb2-9db4-466a2be665fb\") " pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.864163 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-krv89\" (UniqueName: \"kubernetes.io/projected/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-kube-api-access-krv89\") pod \"redhat-operators-2w6wx\" (UID: \"6d2e4af7-1a94-4fb2-9db4-466a2be665fb\") " pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.864266 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-catalog-content\") pod \"redhat-operators-2w6wx\" (UID: \"6d2e4af7-1a94-4fb2-9db4-466a2be665fb\") " pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.864301 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-utilities\") pod \"redhat-operators-2w6wx\" (UID: \"6d2e4af7-1a94-4fb2-9db4-466a2be665fb\") " pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.864928 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-utilities\") pod \"redhat-operators-2w6wx\" (UID: \"6d2e4af7-1a94-4fb2-9db4-466a2be665fb\") " pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.865343 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-catalog-content\") pod \"redhat-operators-2w6wx\" (UID: \"6d2e4af7-1a94-4fb2-9db4-466a2be665fb\") " pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:26:54 crc kubenswrapper[5116]: I1209 14:26:54.883751 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-krv89\" (UniqueName: \"kubernetes.io/projected/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-kube-api-access-krv89\") pod \"redhat-operators-2w6wx\" (UID: 
\"6d2e4af7-1a94-4fb2-9db4-466a2be665fb\") " pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:26:55 crc kubenswrapper[5116]: I1209 14:26:55.023939 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:26:55 crc kubenswrapper[5116]: I1209 14:26:55.227580 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2w6wx"] Dec 09 14:26:56 crc kubenswrapper[5116]: I1209 14:26:56.239678 5116 generic.go:358] "Generic (PLEG): container finished" podID="6d2e4af7-1a94-4fb2-9db4-466a2be665fb" containerID="dc20595a727594fca139983356c2041941e8bab78ea6e28ddbc476d5174888a5" exitCode=0 Dec 09 14:26:56 crc kubenswrapper[5116]: I1209 14:26:56.240019 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2w6wx" event={"ID":"6d2e4af7-1a94-4fb2-9db4-466a2be665fb","Type":"ContainerDied","Data":"dc20595a727594fca139983356c2041941e8bab78ea6e28ddbc476d5174888a5"} Dec 09 14:26:56 crc kubenswrapper[5116]: I1209 14:26:56.240066 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2w6wx" event={"ID":"6d2e4af7-1a94-4fb2-9db4-466a2be665fb","Type":"ContainerStarted","Data":"131ca40f39c6a8c584043c0fb6fc4752660dadfb016d5d00908eaa9eddb31e52"} Dec 09 14:26:57 crc kubenswrapper[5116]: I1209 14:26:57.248554 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2w6wx" event={"ID":"6d2e4af7-1a94-4fb2-9db4-466a2be665fb","Type":"ContainerStarted","Data":"4e1be4d6d4c4166ca7ea2829f7afae77d1b76f40aca7a91c7196f8ce29006d0a"} Dec 09 14:26:58 crc kubenswrapper[5116]: I1209 14:26:58.255759 5116 generic.go:358] "Generic (PLEG): container finished" podID="6d2e4af7-1a94-4fb2-9db4-466a2be665fb" containerID="4e1be4d6d4c4166ca7ea2829f7afae77d1b76f40aca7a91c7196f8ce29006d0a" exitCode=0 Dec 09 14:26:58 crc kubenswrapper[5116]: I1209 14:26:58.255870 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2w6wx" event={"ID":"6d2e4af7-1a94-4fb2-9db4-466a2be665fb","Type":"ContainerDied","Data":"4e1be4d6d4c4166ca7ea2829f7afae77d1b76f40aca7a91c7196f8ce29006d0a"} Dec 09 14:26:59 crc kubenswrapper[5116]: I1209 14:26:59.265257 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2w6wx" event={"ID":"6d2e4af7-1a94-4fb2-9db4-466a2be665fb","Type":"ContainerStarted","Data":"028f220c03afa2e0411d2a4453dd49889d6fc80beaf737697fb4905660b4b38a"} Dec 09 14:26:59 crc kubenswrapper[5116]: I1209 14:26:59.304588 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2w6wx" podStartSLOduration=4.561088776 podStartE2EDuration="5.304563278s" podCreationTimestamp="2025-12-09 14:26:54 +0000 UTC" firstStartedPulling="2025-12-09 14:26:56.240984633 +0000 UTC m=+754.762729471" lastFinishedPulling="2025-12-09 14:26:56.984459145 +0000 UTC m=+755.506203973" observedRunningTime="2025-12-09 14:26:59.302257658 +0000 UTC m=+757.824002466" watchObservedRunningTime="2025-12-09 14:26:59.304563278 +0000 UTC m=+757.826308106" Dec 09 14:27:02 crc kubenswrapper[5116]: I1209 14:27:02.065265 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-npvwf"] Dec 09 14:27:02 crc kubenswrapper[5116]: I1209 14:27:02.099719 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-npvwf"] Dec 09 14:27:02 
crc kubenswrapper[5116]: I1209 14:27:02.099843 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:02 crc kubenswrapper[5116]: I1209 14:27:02.163743 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/010057fc-c8fe-41ef-8c17-561f0c53399c-utilities\") pod \"certified-operators-npvwf\" (UID: \"010057fc-c8fe-41ef-8c17-561f0c53399c\") " pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:02 crc kubenswrapper[5116]: I1209 14:27:02.163858 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhnj5\" (UniqueName: \"kubernetes.io/projected/010057fc-c8fe-41ef-8c17-561f0c53399c-kube-api-access-xhnj5\") pod \"certified-operators-npvwf\" (UID: \"010057fc-c8fe-41ef-8c17-561f0c53399c\") " pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:02 crc kubenswrapper[5116]: I1209 14:27:02.164195 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/010057fc-c8fe-41ef-8c17-561f0c53399c-catalog-content\") pod \"certified-operators-npvwf\" (UID: \"010057fc-c8fe-41ef-8c17-561f0c53399c\") " pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:02 crc kubenswrapper[5116]: I1209 14:27:02.266024 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/010057fc-c8fe-41ef-8c17-561f0c53399c-utilities\") pod \"certified-operators-npvwf\" (UID: \"010057fc-c8fe-41ef-8c17-561f0c53399c\") " pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:02 crc kubenswrapper[5116]: I1209 14:27:02.266103 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xhnj5\" (UniqueName: \"kubernetes.io/projected/010057fc-c8fe-41ef-8c17-561f0c53399c-kube-api-access-xhnj5\") pod \"certified-operators-npvwf\" (UID: \"010057fc-c8fe-41ef-8c17-561f0c53399c\") " pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:02 crc kubenswrapper[5116]: I1209 14:27:02.266282 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/010057fc-c8fe-41ef-8c17-561f0c53399c-catalog-content\") pod \"certified-operators-npvwf\" (UID: \"010057fc-c8fe-41ef-8c17-561f0c53399c\") " pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:02 crc kubenswrapper[5116]: I1209 14:27:02.266587 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/010057fc-c8fe-41ef-8c17-561f0c53399c-utilities\") pod \"certified-operators-npvwf\" (UID: \"010057fc-c8fe-41ef-8c17-561f0c53399c\") " pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:02 crc kubenswrapper[5116]: I1209 14:27:02.267121 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/010057fc-c8fe-41ef-8c17-561f0c53399c-catalog-content\") pod \"certified-operators-npvwf\" (UID: \"010057fc-c8fe-41ef-8c17-561f0c53399c\") " pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:02 crc kubenswrapper[5116]: I1209 14:27:02.292397 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhnj5\" 
(UniqueName: \"kubernetes.io/projected/010057fc-c8fe-41ef-8c17-561f0c53399c-kube-api-access-xhnj5\") pod \"certified-operators-npvwf\" (UID: \"010057fc-c8fe-41ef-8c17-561f0c53399c\") " pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:02 crc kubenswrapper[5116]: I1209 14:27:02.429946 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:02 crc kubenswrapper[5116]: I1209 14:27:02.644166 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-npvwf"] Dec 09 14:27:03 crc kubenswrapper[5116]: I1209 14:27:03.290793 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npvwf" event={"ID":"010057fc-c8fe-41ef-8c17-561f0c53399c","Type":"ContainerStarted","Data":"f4d9be2c4d2fc991732145ac5651458a1a6fec9fe2da1a72d76b26aeca8c638c"} Dec 09 14:27:04 crc kubenswrapper[5116]: I1209 14:27:04.299254 5116 generic.go:358] "Generic (PLEG): container finished" podID="010057fc-c8fe-41ef-8c17-561f0c53399c" containerID="fd4a38345ccab7a9c0ef74d3c3d332f7b62c08c4ad507b9f2a0c54eebdec6d13" exitCode=0 Dec 09 14:27:04 crc kubenswrapper[5116]: I1209 14:27:04.299323 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npvwf" event={"ID":"010057fc-c8fe-41ef-8c17-561f0c53399c","Type":"ContainerDied","Data":"fd4a38345ccab7a9c0ef74d3c3d332f7b62c08c4ad507b9f2a0c54eebdec6d13"} Dec 09 14:27:05 crc kubenswrapper[5116]: I1209 14:27:05.024489 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:27:05 crc kubenswrapper[5116]: I1209 14:27:05.024541 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:27:05 crc kubenswrapper[5116]: I1209 14:27:05.115620 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:27:05 crc kubenswrapper[5116]: I1209 14:27:05.307826 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npvwf" event={"ID":"010057fc-c8fe-41ef-8c17-561f0c53399c","Type":"ContainerStarted","Data":"cb6861da79c5349a12311403bbf3e969f2731dcedbe45432755c287c45d00877"} Dec 09 14:27:05 crc kubenswrapper[5116]: I1209 14:27:05.372823 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:27:06 crc kubenswrapper[5116]: I1209 14:27:06.316403 5116 generic.go:358] "Generic (PLEG): container finished" podID="010057fc-c8fe-41ef-8c17-561f0c53399c" containerID="cb6861da79c5349a12311403bbf3e969f2731dcedbe45432755c287c45d00877" exitCode=0 Dec 09 14:27:06 crc kubenswrapper[5116]: I1209 14:27:06.316535 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npvwf" event={"ID":"010057fc-c8fe-41ef-8c17-561f0c53399c","Type":"ContainerDied","Data":"cb6861da79c5349a12311403bbf3e969f2731dcedbe45432755c287c45d00877"} Dec 09 14:27:07 crc kubenswrapper[5116]: I1209 14:27:07.055037 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2w6wx"] Dec 09 14:27:07 crc kubenswrapper[5116]: I1209 14:27:07.326821 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npvwf" 
event={"ID":"010057fc-c8fe-41ef-8c17-561f0c53399c","Type":"ContainerStarted","Data":"5b7e09d02c24bf209b08447e1f8d140ef7947e5a2a2d7d9b7eb7ec59d3e70374"} Dec 09 14:27:07 crc kubenswrapper[5116]: I1209 14:27:07.327088 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2w6wx" podUID="6d2e4af7-1a94-4fb2-9db4-466a2be665fb" containerName="registry-server" containerID="cri-o://028f220c03afa2e0411d2a4453dd49889d6fc80beaf737697fb4905660b4b38a" gracePeriod=2 Dec 09 14:27:07 crc kubenswrapper[5116]: I1209 14:27:07.351327 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-npvwf" podStartSLOduration=4.755310579 podStartE2EDuration="5.351306024s" podCreationTimestamp="2025-12-09 14:27:02 +0000 UTC" firstStartedPulling="2025-12-09 14:27:04.30080368 +0000 UTC m=+762.822548518" lastFinishedPulling="2025-12-09 14:27:04.896799135 +0000 UTC m=+763.418543963" observedRunningTime="2025-12-09 14:27:07.345903774 +0000 UTC m=+765.867648602" watchObservedRunningTime="2025-12-09 14:27:07.351306024 +0000 UTC m=+765.873050832" Dec 09 14:27:07 crc kubenswrapper[5116]: I1209 14:27:07.738875 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:27:07 crc kubenswrapper[5116]: I1209 14:27:07.867236 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-catalog-content\") pod \"6d2e4af7-1a94-4fb2-9db4-466a2be665fb\" (UID: \"6d2e4af7-1a94-4fb2-9db4-466a2be665fb\") " Dec 09 14:27:07 crc kubenswrapper[5116]: I1209 14:27:07.867307 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-utilities\") pod \"6d2e4af7-1a94-4fb2-9db4-466a2be665fb\" (UID: \"6d2e4af7-1a94-4fb2-9db4-466a2be665fb\") " Dec 09 14:27:07 crc kubenswrapper[5116]: I1209 14:27:07.867354 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krv89\" (UniqueName: \"kubernetes.io/projected/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-kube-api-access-krv89\") pod \"6d2e4af7-1a94-4fb2-9db4-466a2be665fb\" (UID: \"6d2e4af7-1a94-4fb2-9db4-466a2be665fb\") " Dec 09 14:27:07 crc kubenswrapper[5116]: I1209 14:27:07.870415 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-utilities" (OuterVolumeSpecName: "utilities") pod "6d2e4af7-1a94-4fb2-9db4-466a2be665fb" (UID: "6d2e4af7-1a94-4fb2-9db4-466a2be665fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:07 crc kubenswrapper[5116]: I1209 14:27:07.875374 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-kube-api-access-krv89" (OuterVolumeSpecName: "kube-api-access-krv89") pod "6d2e4af7-1a94-4fb2-9db4-466a2be665fb" (UID: "6d2e4af7-1a94-4fb2-9db4-466a2be665fb"). InnerVolumeSpecName "kube-api-access-krv89". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:27:07 crc kubenswrapper[5116]: I1209 14:27:07.969416 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:07 crc kubenswrapper[5116]: I1209 14:27:07.969468 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-krv89\" (UniqueName: \"kubernetes.io/projected/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-kube-api-access-krv89\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.334559 5116 generic.go:358] "Generic (PLEG): container finished" podID="6d2e4af7-1a94-4fb2-9db4-466a2be665fb" containerID="028f220c03afa2e0411d2a4453dd49889d6fc80beaf737697fb4905660b4b38a" exitCode=0 Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.335390 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2w6wx" Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.335728 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2w6wx" event={"ID":"6d2e4af7-1a94-4fb2-9db4-466a2be665fb","Type":"ContainerDied","Data":"028f220c03afa2e0411d2a4453dd49889d6fc80beaf737697fb4905660b4b38a"} Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.335751 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2w6wx" event={"ID":"6d2e4af7-1a94-4fb2-9db4-466a2be665fb","Type":"ContainerDied","Data":"131ca40f39c6a8c584043c0fb6fc4752660dadfb016d5d00908eaa9eddb31e52"} Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.335767 5116 scope.go:117] "RemoveContainer" containerID="028f220c03afa2e0411d2a4453dd49889d6fc80beaf737697fb4905660b4b38a" Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.361946 5116 scope.go:117] "RemoveContainer" containerID="4e1be4d6d4c4166ca7ea2829f7afae77d1b76f40aca7a91c7196f8ce29006d0a" Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.382230 5116 scope.go:117] "RemoveContainer" containerID="dc20595a727594fca139983356c2041941e8bab78ea6e28ddbc476d5174888a5" Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.402179 5116 scope.go:117] "RemoveContainer" containerID="028f220c03afa2e0411d2a4453dd49889d6fc80beaf737697fb4905660b4b38a" Dec 09 14:27:08 crc kubenswrapper[5116]: E1209 14:27:08.402758 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"028f220c03afa2e0411d2a4453dd49889d6fc80beaf737697fb4905660b4b38a\": container with ID starting with 028f220c03afa2e0411d2a4453dd49889d6fc80beaf737697fb4905660b4b38a not found: ID does not exist" containerID="028f220c03afa2e0411d2a4453dd49889d6fc80beaf737697fb4905660b4b38a" Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.402792 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"028f220c03afa2e0411d2a4453dd49889d6fc80beaf737697fb4905660b4b38a"} err="failed to get container status \"028f220c03afa2e0411d2a4453dd49889d6fc80beaf737697fb4905660b4b38a\": rpc error: code = NotFound desc = could not find container \"028f220c03afa2e0411d2a4453dd49889d6fc80beaf737697fb4905660b4b38a\": container with ID starting with 028f220c03afa2e0411d2a4453dd49889d6fc80beaf737697fb4905660b4b38a not found: ID does not exist" Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.402815 5116 scope.go:117] 
"RemoveContainer" containerID="4e1be4d6d4c4166ca7ea2829f7afae77d1b76f40aca7a91c7196f8ce29006d0a" Dec 09 14:27:08 crc kubenswrapper[5116]: E1209 14:27:08.404012 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e1be4d6d4c4166ca7ea2829f7afae77d1b76f40aca7a91c7196f8ce29006d0a\": container with ID starting with 4e1be4d6d4c4166ca7ea2829f7afae77d1b76f40aca7a91c7196f8ce29006d0a not found: ID does not exist" containerID="4e1be4d6d4c4166ca7ea2829f7afae77d1b76f40aca7a91c7196f8ce29006d0a" Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.404058 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e1be4d6d4c4166ca7ea2829f7afae77d1b76f40aca7a91c7196f8ce29006d0a"} err="failed to get container status \"4e1be4d6d4c4166ca7ea2829f7afae77d1b76f40aca7a91c7196f8ce29006d0a\": rpc error: code = NotFound desc = could not find container \"4e1be4d6d4c4166ca7ea2829f7afae77d1b76f40aca7a91c7196f8ce29006d0a\": container with ID starting with 4e1be4d6d4c4166ca7ea2829f7afae77d1b76f40aca7a91c7196f8ce29006d0a not found: ID does not exist" Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.404081 5116 scope.go:117] "RemoveContainer" containerID="dc20595a727594fca139983356c2041941e8bab78ea6e28ddbc476d5174888a5" Dec 09 14:27:08 crc kubenswrapper[5116]: E1209 14:27:08.404384 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc20595a727594fca139983356c2041941e8bab78ea6e28ddbc476d5174888a5\": container with ID starting with dc20595a727594fca139983356c2041941e8bab78ea6e28ddbc476d5174888a5 not found: ID does not exist" containerID="dc20595a727594fca139983356c2041941e8bab78ea6e28ddbc476d5174888a5" Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.404426 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc20595a727594fca139983356c2041941e8bab78ea6e28ddbc476d5174888a5"} err="failed to get container status \"dc20595a727594fca139983356c2041941e8bab78ea6e28ddbc476d5174888a5\": rpc error: code = NotFound desc = could not find container \"dc20595a727594fca139983356c2041941e8bab78ea6e28ddbc476d5174888a5\": container with ID starting with dc20595a727594fca139983356c2041941e8bab78ea6e28ddbc476d5174888a5 not found: ID does not exist" Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.743637 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d2e4af7-1a94-4fb2-9db4-466a2be665fb" (UID: "6d2e4af7-1a94-4fb2-9db4-466a2be665fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.792188 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d2e4af7-1a94-4fb2-9db4-466a2be665fb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.971801 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2w6wx"] Dec 09 14:27:08 crc kubenswrapper[5116]: I1209 14:27:08.976736 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2w6wx"] Dec 09 14:27:09 crc kubenswrapper[5116]: I1209 14:27:09.761111 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d2e4af7-1a94-4fb2-9db4-466a2be665fb" path="/var/lib/kubelet/pods/6d2e4af7-1a94-4fb2-9db4-466a2be665fb/volumes" Dec 09 14:27:12 crc kubenswrapper[5116]: I1209 14:27:12.430545 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:12 crc kubenswrapper[5116]: I1209 14:27:12.430914 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:12 crc kubenswrapper[5116]: I1209 14:27:12.467631 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:13 crc kubenswrapper[5116]: I1209 14:27:13.432666 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:13 crc kubenswrapper[5116]: I1209 14:27:13.493392 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-npvwf"] Dec 09 14:27:15 crc kubenswrapper[5116]: I1209 14:27:15.383724 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-npvwf" podUID="010057fc-c8fe-41ef-8c17-561f0c53399c" containerName="registry-server" containerID="cri-o://5b7e09d02c24bf209b08447e1f8d140ef7947e5a2a2d7d9b7eb7ec59d3e70374" gracePeriod=2 Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.290483 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.392115 5116 generic.go:358] "Generic (PLEG): container finished" podID="010057fc-c8fe-41ef-8c17-561f0c53399c" containerID="5b7e09d02c24bf209b08447e1f8d140ef7947e5a2a2d7d9b7eb7ec59d3e70374" exitCode=0 Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.392161 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npvwf" event={"ID":"010057fc-c8fe-41ef-8c17-561f0c53399c","Type":"ContainerDied","Data":"5b7e09d02c24bf209b08447e1f8d140ef7947e5a2a2d7d9b7eb7ec59d3e70374"} Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.392200 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npvwf" event={"ID":"010057fc-c8fe-41ef-8c17-561f0c53399c","Type":"ContainerDied","Data":"f4d9be2c4d2fc991732145ac5651458a1a6fec9fe2da1a72d76b26aeca8c638c"} Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.392220 5116 scope.go:117] "RemoveContainer" containerID="5b7e09d02c24bf209b08447e1f8d140ef7947e5a2a2d7d9b7eb7ec59d3e70374" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.392288 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npvwf" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.395335 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/010057fc-c8fe-41ef-8c17-561f0c53399c-utilities\") pod \"010057fc-c8fe-41ef-8c17-561f0c53399c\" (UID: \"010057fc-c8fe-41ef-8c17-561f0c53399c\") " Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.396387 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/010057fc-c8fe-41ef-8c17-561f0c53399c-catalog-content\") pod \"010057fc-c8fe-41ef-8c17-561f0c53399c\" (UID: \"010057fc-c8fe-41ef-8c17-561f0c53399c\") " Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.396468 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhnj5\" (UniqueName: \"kubernetes.io/projected/010057fc-c8fe-41ef-8c17-561f0c53399c-kube-api-access-xhnj5\") pod \"010057fc-c8fe-41ef-8c17-561f0c53399c\" (UID: \"010057fc-c8fe-41ef-8c17-561f0c53399c\") " Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.396995 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/010057fc-c8fe-41ef-8c17-561f0c53399c-utilities" (OuterVolumeSpecName: "utilities") pod "010057fc-c8fe-41ef-8c17-561f0c53399c" (UID: "010057fc-c8fe-41ef-8c17-561f0c53399c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.407260 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/010057fc-c8fe-41ef-8c17-561f0c53399c-kube-api-access-xhnj5" (OuterVolumeSpecName: "kube-api-access-xhnj5") pod "010057fc-c8fe-41ef-8c17-561f0c53399c" (UID: "010057fc-c8fe-41ef-8c17-561f0c53399c"). InnerVolumeSpecName "kube-api-access-xhnj5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.417559 5116 scope.go:117] "RemoveContainer" containerID="cb6861da79c5349a12311403bbf3e969f2731dcedbe45432755c287c45d00877" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.438025 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/010057fc-c8fe-41ef-8c17-561f0c53399c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "010057fc-c8fe-41ef-8c17-561f0c53399c" (UID: "010057fc-c8fe-41ef-8c17-561f0c53399c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.439795 5116 scope.go:117] "RemoveContainer" containerID="fd4a38345ccab7a9c0ef74d3c3d332f7b62c08c4ad507b9f2a0c54eebdec6d13" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.457983 5116 scope.go:117] "RemoveContainer" containerID="5b7e09d02c24bf209b08447e1f8d140ef7947e5a2a2d7d9b7eb7ec59d3e70374" Dec 09 14:27:16 crc kubenswrapper[5116]: E1209 14:27:16.458519 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b7e09d02c24bf209b08447e1f8d140ef7947e5a2a2d7d9b7eb7ec59d3e70374\": container with ID starting with 5b7e09d02c24bf209b08447e1f8d140ef7947e5a2a2d7d9b7eb7ec59d3e70374 not found: ID does not exist" containerID="5b7e09d02c24bf209b08447e1f8d140ef7947e5a2a2d7d9b7eb7ec59d3e70374" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.458548 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b7e09d02c24bf209b08447e1f8d140ef7947e5a2a2d7d9b7eb7ec59d3e70374"} err="failed to get container status \"5b7e09d02c24bf209b08447e1f8d140ef7947e5a2a2d7d9b7eb7ec59d3e70374\": rpc error: code = NotFound desc = could not find container \"5b7e09d02c24bf209b08447e1f8d140ef7947e5a2a2d7d9b7eb7ec59d3e70374\": container with ID starting with 5b7e09d02c24bf209b08447e1f8d140ef7947e5a2a2d7d9b7eb7ec59d3e70374 not found: ID does not exist" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.458567 5116 scope.go:117] "RemoveContainer" containerID="cb6861da79c5349a12311403bbf3e969f2731dcedbe45432755c287c45d00877" Dec 09 14:27:16 crc kubenswrapper[5116]: E1209 14:27:16.459080 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb6861da79c5349a12311403bbf3e969f2731dcedbe45432755c287c45d00877\": container with ID starting with cb6861da79c5349a12311403bbf3e969f2731dcedbe45432755c287c45d00877 not found: ID does not exist" containerID="cb6861da79c5349a12311403bbf3e969f2731dcedbe45432755c287c45d00877" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.459139 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb6861da79c5349a12311403bbf3e969f2731dcedbe45432755c287c45d00877"} err="failed to get container status \"cb6861da79c5349a12311403bbf3e969f2731dcedbe45432755c287c45d00877\": rpc error: code = NotFound desc = could not find container \"cb6861da79c5349a12311403bbf3e969f2731dcedbe45432755c287c45d00877\": container with ID starting with cb6861da79c5349a12311403bbf3e969f2731dcedbe45432755c287c45d00877 not found: ID does not exist" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.459176 5116 scope.go:117] "RemoveContainer" containerID="fd4a38345ccab7a9c0ef74d3c3d332f7b62c08c4ad507b9f2a0c54eebdec6d13" Dec 09 14:27:16 crc kubenswrapper[5116]: 
E1209 14:27:16.459554 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd4a38345ccab7a9c0ef74d3c3d332f7b62c08c4ad507b9f2a0c54eebdec6d13\": container with ID starting with fd4a38345ccab7a9c0ef74d3c3d332f7b62c08c4ad507b9f2a0c54eebdec6d13 not found: ID does not exist" containerID="fd4a38345ccab7a9c0ef74d3c3d332f7b62c08c4ad507b9f2a0c54eebdec6d13" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.459582 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4a38345ccab7a9c0ef74d3c3d332f7b62c08c4ad507b9f2a0c54eebdec6d13"} err="failed to get container status \"fd4a38345ccab7a9c0ef74d3c3d332f7b62c08c4ad507b9f2a0c54eebdec6d13\": rpc error: code = NotFound desc = could not find container \"fd4a38345ccab7a9c0ef74d3c3d332f7b62c08c4ad507b9f2a0c54eebdec6d13\": container with ID starting with fd4a38345ccab7a9c0ef74d3c3d332f7b62c08c4ad507b9f2a0c54eebdec6d13 not found: ID does not exist" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.497645 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xhnj5\" (UniqueName: \"kubernetes.io/projected/010057fc-c8fe-41ef-8c17-561f0c53399c-kube-api-access-xhnj5\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.497682 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/010057fc-c8fe-41ef-8c17-561f0c53399c-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.497694 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/010057fc-c8fe-41ef-8c17-561f0c53399c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.732038 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-npvwf"] Dec 09 14:27:16 crc kubenswrapper[5116]: I1209 14:27:16.735583 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-npvwf"] Dec 09 14:27:17 crc kubenswrapper[5116]: I1209 14:27:17.759402 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="010057fc-c8fe-41ef-8c17-561f0c53399c" path="/var/lib/kubelet/pods/010057fc-c8fe-41ef-8c17-561f0c53399c/volumes" Dec 09 14:27:22 crc kubenswrapper[5116]: I1209 14:27:22.280847 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w8st"] Dec 09 14:27:22 crc kubenswrapper[5116]: I1209 14:27:22.281574 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4w8st" podUID="089c6387-062c-492a-926d-6a8793fe453b" containerName="registry-server" containerID="cri-o://cac66464fbe41e33e9729b6410719d821f97e77d7de83c1844d57273760ce01b" gracePeriod=30 Dec 09 14:27:22 crc kubenswrapper[5116]: I1209 14:27:22.435511 5116 generic.go:358] "Generic (PLEG): container finished" podID="089c6387-062c-492a-926d-6a8793fe453b" containerID="cac66464fbe41e33e9729b6410719d821f97e77d7de83c1844d57273760ce01b" exitCode=0 Dec 09 14:27:22 crc kubenswrapper[5116]: I1209 14:27:22.435597 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w8st" event={"ID":"089c6387-062c-492a-926d-6a8793fe453b","Type":"ContainerDied","Data":"cac66464fbe41e33e9729b6410719d821f97e77d7de83c1844d57273760ce01b"} Dec 09 
14:27:22 crc kubenswrapper[5116]: I1209 14:27:22.735832 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:27:22 crc kubenswrapper[5116]: I1209 14:27:22.887843 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089c6387-062c-492a-926d-6a8793fe453b-catalog-content\") pod \"089c6387-062c-492a-926d-6a8793fe453b\" (UID: \"089c6387-062c-492a-926d-6a8793fe453b\") " Dec 09 14:27:22 crc kubenswrapper[5116]: I1209 14:27:22.887923 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089c6387-062c-492a-926d-6a8793fe453b-utilities\") pod \"089c6387-062c-492a-926d-6a8793fe453b\" (UID: \"089c6387-062c-492a-926d-6a8793fe453b\") " Dec 09 14:27:22 crc kubenswrapper[5116]: I1209 14:27:22.887943 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kc8d\" (UniqueName: \"kubernetes.io/projected/089c6387-062c-492a-926d-6a8793fe453b-kube-api-access-4kc8d\") pod \"089c6387-062c-492a-926d-6a8793fe453b\" (UID: \"089c6387-062c-492a-926d-6a8793fe453b\") " Dec 09 14:27:22 crc kubenswrapper[5116]: I1209 14:27:22.890248 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/089c6387-062c-492a-926d-6a8793fe453b-utilities" (OuterVolumeSpecName: "utilities") pod "089c6387-062c-492a-926d-6a8793fe453b" (UID: "089c6387-062c-492a-926d-6a8793fe453b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:22 crc kubenswrapper[5116]: I1209 14:27:22.894656 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/089c6387-062c-492a-926d-6a8793fe453b-kube-api-access-4kc8d" (OuterVolumeSpecName: "kube-api-access-4kc8d") pod "089c6387-062c-492a-926d-6a8793fe453b" (UID: "089c6387-062c-492a-926d-6a8793fe453b"). InnerVolumeSpecName "kube-api-access-4kc8d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:27:22 crc kubenswrapper[5116]: I1209 14:27:22.898244 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/089c6387-062c-492a-926d-6a8793fe453b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "089c6387-062c-492a-926d-6a8793fe453b" (UID: "089c6387-062c-492a-926d-6a8793fe453b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:22 crc kubenswrapper[5116]: I1209 14:27:22.989859 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089c6387-062c-492a-926d-6a8793fe453b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:22 crc kubenswrapper[5116]: I1209 14:27:22.989914 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089c6387-062c-492a-926d-6a8793fe453b-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:22 crc kubenswrapper[5116]: I1209 14:27:22.989935 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4kc8d\" (UniqueName: \"kubernetes.io/projected/089c6387-062c-492a-926d-6a8793fe453b-kube-api-access-4kc8d\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.445817 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w8st" event={"ID":"089c6387-062c-492a-926d-6a8793fe453b","Type":"ContainerDied","Data":"fdec22c9a9650df622e34807fad9ca97f3c7432bfab0687fa4c3149133a7565f"} Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.445828 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w8st" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.445902 5116 scope.go:117] "RemoveContainer" containerID="cac66464fbe41e33e9729b6410719d821f97e77d7de83c1844d57273760ce01b" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480073 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-qm9cn"] Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480710 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6d2e4af7-1a94-4fb2-9db4-466a2be665fb" containerName="registry-server" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480735 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d2e4af7-1a94-4fb2-9db4-466a2be665fb" containerName="registry-server" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480754 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6d2e4af7-1a94-4fb2-9db4-466a2be665fb" containerName="extract-utilities" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480762 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d2e4af7-1a94-4fb2-9db4-466a2be665fb" containerName="extract-utilities" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480779 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6d2e4af7-1a94-4fb2-9db4-466a2be665fb" containerName="extract-content" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480787 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d2e4af7-1a94-4fb2-9db4-466a2be665fb" containerName="extract-content" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480801 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="089c6387-062c-492a-926d-6a8793fe453b" containerName="extract-utilities" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480807 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="089c6387-062c-492a-926d-6a8793fe453b" containerName="extract-utilities" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480818 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="089c6387-062c-492a-926d-6a8793fe453b" containerName="extract-content" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480826 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="089c6387-062c-492a-926d-6a8793fe453b" containerName="extract-content" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480840 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="010057fc-c8fe-41ef-8c17-561f0c53399c" containerName="extract-utilities" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480847 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="010057fc-c8fe-41ef-8c17-561f0c53399c" containerName="extract-utilities" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480863 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="089c6387-062c-492a-926d-6a8793fe453b" containerName="registry-server" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480871 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="089c6387-062c-492a-926d-6a8793fe453b" containerName="registry-server" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480880 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="010057fc-c8fe-41ef-8c17-561f0c53399c" containerName="registry-server" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480887 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="010057fc-c8fe-41ef-8c17-561f0c53399c" containerName="registry-server" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480895 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="010057fc-c8fe-41ef-8c17-561f0c53399c" containerName="extract-content" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.480902 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="010057fc-c8fe-41ef-8c17-561f0c53399c" containerName="extract-content" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.481011 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="6d2e4af7-1a94-4fb2-9db4-466a2be665fb" containerName="registry-server" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.481027 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="010057fc-c8fe-41ef-8c17-561f0c53399c" containerName="registry-server" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.481037 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="089c6387-062c-492a-926d-6a8793fe453b" containerName="registry-server" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.498957 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-qm9cn"] Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.499096 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.499365 5116 scope.go:117] "RemoveContainer" containerID="0dd51a96f882b5d313575bf448390de35e8b0e20ee3d54f50b166fc8991404da" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.506133 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w8st"] Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.511559 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w8st"] Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.519623 5116 scope.go:117] "RemoveContainer" containerID="9607a22f424742c7aac94f4d1c27600ec7bceef87e5f5450fbc9af7a07fae392" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.599126 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f87144a1-2798-4024-b5c6-c3af6cbd4f86-registry-certificates\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.599213 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f87144a1-2798-4024-b5c6-c3af6cbd4f86-registry-tls\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.599239 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f87144a1-2798-4024-b5c6-c3af6cbd4f86-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.599276 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f87144a1-2798-4024-b5c6-c3af6cbd4f86-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.599299 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f87144a1-2798-4024-b5c6-c3af6cbd4f86-trusted-ca\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.599324 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rnz4\" (UniqueName: \"kubernetes.io/projected/f87144a1-2798-4024-b5c6-c3af6cbd4f86-kube-api-access-8rnz4\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.599382 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.599403 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f87144a1-2798-4024-b5c6-c3af6cbd4f86-bound-sa-token\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.624726 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.700733 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f87144a1-2798-4024-b5c6-c3af6cbd4f86-registry-tls\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.700780 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f87144a1-2798-4024-b5c6-c3af6cbd4f86-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.700808 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f87144a1-2798-4024-b5c6-c3af6cbd4f86-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.700828 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f87144a1-2798-4024-b5c6-c3af6cbd4f86-trusted-ca\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.700862 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8rnz4\" (UniqueName: \"kubernetes.io/projected/f87144a1-2798-4024-b5c6-c3af6cbd4f86-kube-api-access-8rnz4\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.700923 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f87144a1-2798-4024-b5c6-c3af6cbd4f86-bound-sa-token\") pod 
\"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.700958 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f87144a1-2798-4024-b5c6-c3af6cbd4f86-registry-certificates\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.701636 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f87144a1-2798-4024-b5c6-c3af6cbd4f86-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.702383 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f87144a1-2798-4024-b5c6-c3af6cbd4f86-trusted-ca\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.702651 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f87144a1-2798-4024-b5c6-c3af6cbd4f86-registry-certificates\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.705844 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f87144a1-2798-4024-b5c6-c3af6cbd4f86-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.706312 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f87144a1-2798-4024-b5c6-c3af6cbd4f86-registry-tls\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.724554 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f87144a1-2798-4024-b5c6-c3af6cbd4f86-bound-sa-token\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.730165 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rnz4\" (UniqueName: \"kubernetes.io/projected/f87144a1-2798-4024-b5c6-c3af6cbd4f86-kube-api-access-8rnz4\") pod \"image-registry-5d9d95bf5b-qm9cn\" (UID: \"f87144a1-2798-4024-b5c6-c3af6cbd4f86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.763876 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="089c6387-062c-492a-926d-6a8793fe453b" path="/var/lib/kubelet/pods/089c6387-062c-492a-926d-6a8793fe453b/volumes" Dec 09 14:27:23 crc kubenswrapper[5116]: I1209 14:27:23.856206 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:24 crc kubenswrapper[5116]: I1209 14:27:24.309183 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-qm9cn"] Dec 09 14:27:24 crc kubenswrapper[5116]: I1209 14:27:24.454466 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" event={"ID":"f87144a1-2798-4024-b5c6-c3af6cbd4f86","Type":"ContainerStarted","Data":"222844b11dca91eeb7114736c896ed94e15a321cc7203625d9f6ebd51454e65b"} Dec 09 14:27:24 crc kubenswrapper[5116]: I1209 14:27:24.457192 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:24 crc kubenswrapper[5116]: I1209 14:27:24.457527 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" event={"ID":"f87144a1-2798-4024-b5c6-c3af6cbd4f86","Type":"ContainerStarted","Data":"5b50693130bda03bf24ad86346074d8b4b1137218a619a82e406457240bf0892"} Dec 09 14:27:24 crc kubenswrapper[5116]: I1209 14:27:24.476791 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" podStartSLOduration=1.47677543 podStartE2EDuration="1.47677543s" podCreationTimestamp="2025-12-09 14:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:27:24.47637919 +0000 UTC m=+782.998124008" watchObservedRunningTime="2025-12-09 14:27:24.47677543 +0000 UTC m=+782.998520218" Dec 09 14:27:26 crc kubenswrapper[5116]: I1209 14:27:26.496639 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb"] Dec 09 14:27:26 crc kubenswrapper[5116]: I1209 14:27:26.507031 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" Dec 09 14:27:26 crc kubenswrapper[5116]: I1209 14:27:26.507123 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb"] Dec 09 14:27:26 crc kubenswrapper[5116]: I1209 14:27:26.514686 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 09 14:27:26 crc kubenswrapper[5116]: I1209 14:27:26.645909 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/775807ff-508f-490a-ba20-c050c4b65e22-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb\" (UID: \"775807ff-508f-490a-ba20-c050c4b65e22\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" Dec 09 14:27:26 crc kubenswrapper[5116]: I1209 14:27:26.646129 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gqbm\" (UniqueName: \"kubernetes.io/projected/775807ff-508f-490a-ba20-c050c4b65e22-kube-api-access-2gqbm\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb\" (UID: \"775807ff-508f-490a-ba20-c050c4b65e22\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" Dec 09 14:27:26 crc kubenswrapper[5116]: I1209 14:27:26.646390 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/775807ff-508f-490a-ba20-c050c4b65e22-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb\" (UID: \"775807ff-508f-490a-ba20-c050c4b65e22\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" Dec 09 14:27:26 crc kubenswrapper[5116]: I1209 14:27:26.747403 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/775807ff-508f-490a-ba20-c050c4b65e22-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb\" (UID: \"775807ff-508f-490a-ba20-c050c4b65e22\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" Dec 09 14:27:26 crc kubenswrapper[5116]: I1209 14:27:26.747481 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/775807ff-508f-490a-ba20-c050c4b65e22-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb\" (UID: \"775807ff-508f-490a-ba20-c050c4b65e22\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" Dec 09 14:27:26 crc kubenswrapper[5116]: I1209 14:27:26.747540 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2gqbm\" (UniqueName: \"kubernetes.io/projected/775807ff-508f-490a-ba20-c050c4b65e22-kube-api-access-2gqbm\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb\" (UID: \"775807ff-508f-490a-ba20-c050c4b65e22\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" Dec 09 14:27:26 crc kubenswrapper[5116]: I1209 14:27:26.748075 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/775807ff-508f-490a-ba20-c050c4b65e22-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb\" (UID: \"775807ff-508f-490a-ba20-c050c4b65e22\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" Dec 09 14:27:26 crc kubenswrapper[5116]: I1209 14:27:26.748428 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/775807ff-508f-490a-ba20-c050c4b65e22-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb\" (UID: \"775807ff-508f-490a-ba20-c050c4b65e22\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" Dec 09 14:27:26 crc kubenswrapper[5116]: I1209 14:27:26.774088 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gqbm\" (UniqueName: \"kubernetes.io/projected/775807ff-508f-490a-ba20-c050c4b65e22-kube-api-access-2gqbm\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb\" (UID: \"775807ff-508f-490a-ba20-c050c4b65e22\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" Dec 09 14:27:26 crc kubenswrapper[5116]: I1209 14:27:26.825624 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" Dec 09 14:27:27 crc kubenswrapper[5116]: I1209 14:27:27.057741 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb"] Dec 09 14:27:27 crc kubenswrapper[5116]: W1209 14:27:27.066203 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod775807ff_508f_490a_ba20_c050c4b65e22.slice/crio-9110ed72fc2421dee587da90e04366c9dbdd56138308d98715bcbe5390008b08 WatchSource:0}: Error finding container 9110ed72fc2421dee587da90e04366c9dbdd56138308d98715bcbe5390008b08: Status 404 returned error can't find the container with id 9110ed72fc2421dee587da90e04366c9dbdd56138308d98715bcbe5390008b08 Dec 09 14:27:27 crc kubenswrapper[5116]: I1209 14:27:27.475986 5116 generic.go:358] "Generic (PLEG): container finished" podID="775807ff-508f-490a-ba20-c050c4b65e22" containerID="b84300e7b6edfcbb17488b025491d92bba7f1fbd687bc6bae6bb9132f08410f6" exitCode=0 Dec 09 14:27:27 crc kubenswrapper[5116]: I1209 14:27:27.476036 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" event={"ID":"775807ff-508f-490a-ba20-c050c4b65e22","Type":"ContainerDied","Data":"b84300e7b6edfcbb17488b025491d92bba7f1fbd687bc6bae6bb9132f08410f6"} Dec 09 14:27:27 crc kubenswrapper[5116]: I1209 14:27:27.476093 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" event={"ID":"775807ff-508f-490a-ba20-c050c4b65e22","Type":"ContainerStarted","Data":"9110ed72fc2421dee587da90e04366c9dbdd56138308d98715bcbe5390008b08"} Dec 09 14:27:28 crc kubenswrapper[5116]: I1209 14:27:28.483357 5116 generic.go:358] "Generic (PLEG): container finished" podID="775807ff-508f-490a-ba20-c050c4b65e22" containerID="d49e98e025e58282a2f76a828556488d3bc974fb59c7ae23e8107d5200862a2d" exitCode=0 Dec 09 14:27:28 crc kubenswrapper[5116]: I1209 14:27:28.483429 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" event={"ID":"775807ff-508f-490a-ba20-c050c4b65e22","Type":"ContainerDied","Data":"d49e98e025e58282a2f76a828556488d3bc974fb59c7ae23e8107d5200862a2d"} Dec 09 14:27:29 crc kubenswrapper[5116]: I1209 14:27:29.492080 5116 generic.go:358] "Generic (PLEG): container finished" podID="775807ff-508f-490a-ba20-c050c4b65e22" containerID="3ff76d95f3c288f833a41665850e5da2441bfefd80f66ee5874157064be9ace8" exitCode=0 Dec 09 14:27:29 crc kubenswrapper[5116]: I1209 14:27:29.492158 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" event={"ID":"775807ff-508f-490a-ba20-c050c4b65e22","Type":"ContainerDied","Data":"3ff76d95f3c288f833a41665850e5da2441bfefd80f66ee5874157064be9ace8"} Dec 09 14:27:30 crc kubenswrapper[5116]: I1209 14:27:30.730726 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" Dec 09 14:27:30 crc kubenswrapper[5116]: I1209 14:27:30.906495 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gqbm\" (UniqueName: \"kubernetes.io/projected/775807ff-508f-490a-ba20-c050c4b65e22-kube-api-access-2gqbm\") pod \"775807ff-508f-490a-ba20-c050c4b65e22\" (UID: \"775807ff-508f-490a-ba20-c050c4b65e22\") " Dec 09 14:27:30 crc kubenswrapper[5116]: I1209 14:27:30.906556 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/775807ff-508f-490a-ba20-c050c4b65e22-bundle\") pod \"775807ff-508f-490a-ba20-c050c4b65e22\" (UID: \"775807ff-508f-490a-ba20-c050c4b65e22\") " Dec 09 14:27:30 crc kubenswrapper[5116]: I1209 14:27:30.906754 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/775807ff-508f-490a-ba20-c050c4b65e22-util\") pod \"775807ff-508f-490a-ba20-c050c4b65e22\" (UID: \"775807ff-508f-490a-ba20-c050c4b65e22\") " Dec 09 14:27:30 crc kubenswrapper[5116]: I1209 14:27:30.910391 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/775807ff-508f-490a-ba20-c050c4b65e22-bundle" (OuterVolumeSpecName: "bundle") pod "775807ff-508f-490a-ba20-c050c4b65e22" (UID: "775807ff-508f-490a-ba20-c050c4b65e22"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:30 crc kubenswrapper[5116]: I1209 14:27:30.913768 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775807ff-508f-490a-ba20-c050c4b65e22-kube-api-access-2gqbm" (OuterVolumeSpecName: "kube-api-access-2gqbm") pod "775807ff-508f-490a-ba20-c050c4b65e22" (UID: "775807ff-508f-490a-ba20-c050c4b65e22"). InnerVolumeSpecName "kube-api-access-2gqbm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:27:30 crc kubenswrapper[5116]: I1209 14:27:30.942104 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/775807ff-508f-490a-ba20-c050c4b65e22-util" (OuterVolumeSpecName: "util") pod "775807ff-508f-490a-ba20-c050c4b65e22" (UID: "775807ff-508f-490a-ba20-c050c4b65e22"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:31 crc kubenswrapper[5116]: I1209 14:27:31.008711 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2gqbm\" (UniqueName: \"kubernetes.io/projected/775807ff-508f-490a-ba20-c050c4b65e22-kube-api-access-2gqbm\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:31 crc kubenswrapper[5116]: I1209 14:27:31.008762 5116 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/775807ff-508f-490a-ba20-c050c4b65e22-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:31 crc kubenswrapper[5116]: I1209 14:27:31.008779 5116 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/775807ff-508f-490a-ba20-c050c4b65e22-util\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:31 crc kubenswrapper[5116]: I1209 14:27:31.508180 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" event={"ID":"775807ff-508f-490a-ba20-c050c4b65e22","Type":"ContainerDied","Data":"9110ed72fc2421dee587da90e04366c9dbdd56138308d98715bcbe5390008b08"} Dec 09 14:27:31 crc kubenswrapper[5116]: I1209 14:27:31.508238 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9110ed72fc2421dee587da90e04366c9dbdd56138308d98715bcbe5390008b08" Dec 09 14:27:31 crc kubenswrapper[5116]: I1209 14:27:31.508246 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt2hb" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.693592 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn"] Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.694582 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="775807ff-508f-490a-ba20-c050c4b65e22" containerName="extract" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.694772 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="775807ff-508f-490a-ba20-c050c4b65e22" containerName="extract" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.694793 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="775807ff-508f-490a-ba20-c050c4b65e22" containerName="pull" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.694801 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="775807ff-508f-490a-ba20-c050c4b65e22" containerName="pull" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.694811 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="775807ff-508f-490a-ba20-c050c4b65e22" containerName="util" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.694819 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="775807ff-508f-490a-ba20-c050c4b65e22" containerName="util" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.694938 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="775807ff-508f-490a-ba20-c050c4b65e22" containerName="extract" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.725375 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn"] Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.725569 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.727577 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.778164 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8nsw\" (UniqueName: \"kubernetes.io/projected/ca43b322-baae-46bf-9bda-f41c171faf2a-kube-api-access-s8nsw\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn\" (UID: \"ca43b322-baae-46bf-9bda-f41c171faf2a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.778220 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca43b322-baae-46bf-9bda-f41c171faf2a-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn\" (UID: \"ca43b322-baae-46bf-9bda-f41c171faf2a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.778650 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca43b322-baae-46bf-9bda-f41c171faf2a-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn\" (UID: \"ca43b322-baae-46bf-9bda-f41c171faf2a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.880244 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s8nsw\" (UniqueName: \"kubernetes.io/projected/ca43b322-baae-46bf-9bda-f41c171faf2a-kube-api-access-s8nsw\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn\" (UID: \"ca43b322-baae-46bf-9bda-f41c171faf2a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.880304 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca43b322-baae-46bf-9bda-f41c171faf2a-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn\" (UID: \"ca43b322-baae-46bf-9bda-f41c171faf2a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.880382 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca43b322-baae-46bf-9bda-f41c171faf2a-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn\" (UID: \"ca43b322-baae-46bf-9bda-f41c171faf2a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.881205 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca43b322-baae-46bf-9bda-f41c171faf2a-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn\" (UID: \"ca43b322-baae-46bf-9bda-f41c171faf2a\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.881201 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca43b322-baae-46bf-9bda-f41c171faf2a-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn\" (UID: \"ca43b322-baae-46bf-9bda-f41c171faf2a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" Dec 09 14:27:35 crc kubenswrapper[5116]: I1209 14:27:35.908757 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8nsw\" (UniqueName: \"kubernetes.io/projected/ca43b322-baae-46bf-9bda-f41c171faf2a-kube-api-access-s8nsw\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn\" (UID: \"ca43b322-baae-46bf-9bda-f41c171faf2a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.049090 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.336808 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn"] Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.483697 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd"] Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.487975 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.492871 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd"] Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.541487 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" event={"ID":"ca43b322-baae-46bf-9bda-f41c171faf2a","Type":"ContainerStarted","Data":"a582eb8ae7ea695b20ae70fa796a47bf3804197c0cc6ad21034b73c14b40cc5f"} Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.541574 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" event={"ID":"ca43b322-baae-46bf-9bda-f41c171faf2a","Type":"ContainerStarted","Data":"7ce664600a09f9f7b0cdefe45d06807a7aa05a089d734e57d2e2cbe1420d620d"} Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.592792 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fz8n\" (UniqueName: \"kubernetes.io/projected/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-kube-api-access-9fz8n\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd\" (UID: \"7ff50caf-ca9a-42d3-b54e-0abf8ea31022\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.592905 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd\" (UID: \"7ff50caf-ca9a-42d3-b54e-0abf8ea31022\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.592925 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd\" (UID: \"7ff50caf-ca9a-42d3-b54e-0abf8ea31022\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.694891 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9fz8n\" (UniqueName: \"kubernetes.io/projected/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-kube-api-access-9fz8n\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd\" (UID: \"7ff50caf-ca9a-42d3-b54e-0abf8ea31022\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.694953 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd\" (UID: \"7ff50caf-ca9a-42d3-b54e-0abf8ea31022\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.695110 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd\" (UID: \"7ff50caf-ca9a-42d3-b54e-0abf8ea31022\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.695409 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd\" (UID: \"7ff50caf-ca9a-42d3-b54e-0abf8ea31022\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.695420 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd\" (UID: \"7ff50caf-ca9a-42d3-b54e-0abf8ea31022\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.713875 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fz8n\" (UniqueName: \"kubernetes.io/projected/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-kube-api-access-9fz8n\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd\" (UID: \"7ff50caf-ca9a-42d3-b54e-0abf8ea31022\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" Dec 09 14:27:36 crc kubenswrapper[5116]: I1209 14:27:36.804731 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" Dec 09 14:27:37 crc kubenswrapper[5116]: I1209 14:27:37.038179 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd"] Dec 09 14:27:37 crc kubenswrapper[5116]: W1209 14:27:37.044730 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff50caf_ca9a_42d3_b54e_0abf8ea31022.slice/crio-f50cf575d3661a7ed36886aac6a4c7978a33aa6f3ff504f40b29ed78aa740474 WatchSource:0}: Error finding container f50cf575d3661a7ed36886aac6a4c7978a33aa6f3ff504f40b29ed78aa740474: Status 404 returned error can't find the container with id f50cf575d3661a7ed36886aac6a4c7978a33aa6f3ff504f40b29ed78aa740474 Dec 09 14:27:37 crc kubenswrapper[5116]: I1209 14:27:37.547833 5116 generic.go:358] "Generic (PLEG): container finished" podID="7ff50caf-ca9a-42d3-b54e-0abf8ea31022" containerID="292b697fa5f2c13679b93d073f30b1f1158f0b3464e930799758a3d3887ba861" exitCode=0 Dec 09 14:27:37 crc kubenswrapper[5116]: I1209 14:27:37.547937 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" event={"ID":"7ff50caf-ca9a-42d3-b54e-0abf8ea31022","Type":"ContainerDied","Data":"292b697fa5f2c13679b93d073f30b1f1158f0b3464e930799758a3d3887ba861"} Dec 09 14:27:37 crc kubenswrapper[5116]: I1209 14:27:37.547981 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" event={"ID":"7ff50caf-ca9a-42d3-b54e-0abf8ea31022","Type":"ContainerStarted","Data":"f50cf575d3661a7ed36886aac6a4c7978a33aa6f3ff504f40b29ed78aa740474"} Dec 09 14:27:37 crc kubenswrapper[5116]: I1209 14:27:37.549125 5116 generic.go:358] "Generic (PLEG): container finished" podID="ca43b322-baae-46bf-9bda-f41c171faf2a" containerID="a582eb8ae7ea695b20ae70fa796a47bf3804197c0cc6ad21034b73c14b40cc5f" exitCode=0 Dec 09 14:27:37 crc kubenswrapper[5116]: I1209 14:27:37.549190 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" event={"ID":"ca43b322-baae-46bf-9bda-f41c171faf2a","Type":"ContainerDied","Data":"a582eb8ae7ea695b20ae70fa796a47bf3804197c0cc6ad21034b73c14b40cc5f"} Dec 09 14:27:39 crc kubenswrapper[5116]: I1209 14:27:39.573380 5116 generic.go:358] "Generic (PLEG): container finished" podID="ca43b322-baae-46bf-9bda-f41c171faf2a" containerID="73622e129335a2eff314c41f06c87d57dc39c37b50f194804daf2c6f01c5c7f0" exitCode=0 Dec 09 14:27:39 crc kubenswrapper[5116]: I1209 14:27:39.574088 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" event={"ID":"ca43b322-baae-46bf-9bda-f41c171faf2a","Type":"ContainerDied","Data":"73622e129335a2eff314c41f06c87d57dc39c37b50f194804daf2c6f01c5c7f0"} Dec 09 14:27:40 crc kubenswrapper[5116]: I1209 14:27:40.594279 5116 generic.go:358] "Generic (PLEG): container finished" podID="ca43b322-baae-46bf-9bda-f41c171faf2a" containerID="9380dea78094c6786dfd73e9e910d5a304dc8b5bb12d99c14a2d0757d9e47d0a" exitCode=0 Dec 09 14:27:40 crc kubenswrapper[5116]: I1209 14:27:40.594485 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" 
event={"ID":"ca43b322-baae-46bf-9bda-f41c171faf2a","Type":"ContainerDied","Data":"9380dea78094c6786dfd73e9e910d5a304dc8b5bb12d99c14a2d0757d9e47d0a"} Dec 09 14:27:40 crc kubenswrapper[5116]: I1209 14:27:40.596360 5116 generic.go:358] "Generic (PLEG): container finished" podID="7ff50caf-ca9a-42d3-b54e-0abf8ea31022" containerID="e4aec0d4a98ed5a5f9221c9f0cb1cb41086873b4d97a20da79f20a3249bae445" exitCode=0 Dec 09 14:27:40 crc kubenswrapper[5116]: I1209 14:27:40.596417 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" event={"ID":"7ff50caf-ca9a-42d3-b54e-0abf8ea31022","Type":"ContainerDied","Data":"e4aec0d4a98ed5a5f9221c9f0cb1cb41086873b4d97a20da79f20a3249bae445"} Dec 09 14:27:41 crc kubenswrapper[5116]: I1209 14:27:41.602701 5116 generic.go:358] "Generic (PLEG): container finished" podID="7ff50caf-ca9a-42d3-b54e-0abf8ea31022" containerID="74d8066c984263d03d3e8672d5ea073c9fa059964bd302dd3ae086daaf22ac60" exitCode=0 Dec 09 14:27:41 crc kubenswrapper[5116]: I1209 14:27:41.603805 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" event={"ID":"7ff50caf-ca9a-42d3-b54e-0abf8ea31022","Type":"ContainerDied","Data":"74d8066c984263d03d3e8672d5ea073c9fa059964bd302dd3ae086daaf22ac60"} Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.016108 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.106496 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr"] Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.107026 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca43b322-baae-46bf-9bda-f41c171faf2a" containerName="extract" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.107041 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca43b322-baae-46bf-9bda-f41c171faf2a" containerName="extract" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.107050 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca43b322-baae-46bf-9bda-f41c171faf2a" containerName="util" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.107056 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca43b322-baae-46bf-9bda-f41c171faf2a" containerName="util" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.107065 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca43b322-baae-46bf-9bda-f41c171faf2a" containerName="pull" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.107070 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca43b322-baae-46bf-9bda-f41c171faf2a" containerName="pull" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.107181 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="ca43b322-baae-46bf-9bda-f41c171faf2a" containerName="extract" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.127525 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca43b322-baae-46bf-9bda-f41c171faf2a-util\") pod \"ca43b322-baae-46bf-9bda-f41c171faf2a\" (UID: \"ca43b322-baae-46bf-9bda-f41c171faf2a\") " Dec 09 14:27:42 crc 
kubenswrapper[5116]: I1209 14:27:42.134051 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8nsw\" (UniqueName: \"kubernetes.io/projected/ca43b322-baae-46bf-9bda-f41c171faf2a-kube-api-access-s8nsw\") pod \"ca43b322-baae-46bf-9bda-f41c171faf2a\" (UID: \"ca43b322-baae-46bf-9bda-f41c171faf2a\") " Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.135082 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca43b322-baae-46bf-9bda-f41c171faf2a-bundle\") pod \"ca43b322-baae-46bf-9bda-f41c171faf2a\" (UID: \"ca43b322-baae-46bf-9bda-f41c171faf2a\") " Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.153636 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca43b322-baae-46bf-9bda-f41c171faf2a-kube-api-access-s8nsw" (OuterVolumeSpecName: "kube-api-access-s8nsw") pod "ca43b322-baae-46bf-9bda-f41c171faf2a" (UID: "ca43b322-baae-46bf-9bda-f41c171faf2a"). InnerVolumeSpecName "kube-api-access-s8nsw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.157230 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca43b322-baae-46bf-9bda-f41c171faf2a-bundle" (OuterVolumeSpecName: "bundle") pod "ca43b322-baae-46bf-9bda-f41c171faf2a" (UID: "ca43b322-baae-46bf-9bda-f41c171faf2a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.168072 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca43b322-baae-46bf-9bda-f41c171faf2a-util" (OuterVolumeSpecName: "util") pod "ca43b322-baae-46bf-9bda-f41c171faf2a" (UID: "ca43b322-baae-46bf-9bda-f41c171faf2a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.236183 5116 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca43b322-baae-46bf-9bda-f41c171faf2a-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.236228 5116 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca43b322-baae-46bf-9bda-f41c171faf2a-util\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.236240 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s8nsw\" (UniqueName: \"kubernetes.io/projected/ca43b322-baae-46bf-9bda-f41c171faf2a-kube-api-access-s8nsw\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.465190 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr"] Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.465339 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.540229 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7d859bc1-48d2-4a28-a665-567a8901e800-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr\" (UID: \"7d859bc1-48d2-4a28-a665-567a8901e800\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.540432 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7d859bc1-48d2-4a28-a665-567a8901e800-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr\" (UID: \"7d859bc1-48d2-4a28-a665-567a8901e800\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.540520 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqmxq\" (UniqueName: \"kubernetes.io/projected/7d859bc1-48d2-4a28-a665-567a8901e800-kube-api-access-lqmxq\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr\" (UID: \"7d859bc1-48d2-4a28-a665-567a8901e800\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.610478 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.610490 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egv9tn" event={"ID":"ca43b322-baae-46bf-9bda-f41c171faf2a","Type":"ContainerDied","Data":"7ce664600a09f9f7b0cdefe45d06807a7aa05a089d734e57d2e2cbe1420d620d"} Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.610541 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ce664600a09f9f7b0cdefe45d06807a7aa05a089d734e57d2e2cbe1420d620d" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.645049 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7d859bc1-48d2-4a28-a665-567a8901e800-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr\" (UID: \"7d859bc1-48d2-4a28-a665-567a8901e800\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.645106 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lqmxq\" (UniqueName: \"kubernetes.io/projected/7d859bc1-48d2-4a28-a665-567a8901e800-kube-api-access-lqmxq\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr\" (UID: \"7d859bc1-48d2-4a28-a665-567a8901e800\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.645136 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7d859bc1-48d2-4a28-a665-567a8901e800-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr\" (UID: \"7d859bc1-48d2-4a28-a665-567a8901e800\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.645570 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7d859bc1-48d2-4a28-a665-567a8901e800-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr\" (UID: \"7d859bc1-48d2-4a28-a665-567a8901e800\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.645782 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7d859bc1-48d2-4a28-a665-567a8901e800-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr\" (UID: \"7d859bc1-48d2-4a28-a665-567a8901e800\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.664645 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqmxq\" (UniqueName: \"kubernetes.io/projected/7d859bc1-48d2-4a28-a665-567a8901e800-kube-api-access-lqmxq\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr\" (UID: \"7d859bc1-48d2-4a28-a665-567a8901e800\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.777783 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" Dec 09 14:27:42 crc kubenswrapper[5116]: I1209 14:27:42.926718 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.055462 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-util\") pod \"7ff50caf-ca9a-42d3-b54e-0abf8ea31022\" (UID: \"7ff50caf-ca9a-42d3-b54e-0abf8ea31022\") " Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.055868 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-bundle\") pod \"7ff50caf-ca9a-42d3-b54e-0abf8ea31022\" (UID: \"7ff50caf-ca9a-42d3-b54e-0abf8ea31022\") " Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.055913 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fz8n\" (UniqueName: \"kubernetes.io/projected/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-kube-api-access-9fz8n\") pod \"7ff50caf-ca9a-42d3-b54e-0abf8ea31022\" (UID: \"7ff50caf-ca9a-42d3-b54e-0abf8ea31022\") " Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.056559 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-bundle" (OuterVolumeSpecName: "bundle") pod "7ff50caf-ca9a-42d3-b54e-0abf8ea31022" (UID: "7ff50caf-ca9a-42d3-b54e-0abf8ea31022"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.062130 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-kube-api-access-9fz8n" (OuterVolumeSpecName: "kube-api-access-9fz8n") pod "7ff50caf-ca9a-42d3-b54e-0abf8ea31022" (UID: "7ff50caf-ca9a-42d3-b54e-0abf8ea31022"). InnerVolumeSpecName "kube-api-access-9fz8n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.066718 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-util" (OuterVolumeSpecName: "util") pod "7ff50caf-ca9a-42d3-b54e-0abf8ea31022" (UID: "7ff50caf-ca9a-42d3-b54e-0abf8ea31022"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.086873 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr"] Dec 09 14:27:43 crc kubenswrapper[5116]: W1209 14:27:43.089160 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d859bc1_48d2_4a28_a665_567a8901e800.slice/crio-7f10931cf11f2d0cecb0dcd9b0071353faff8b085ac75ad0d12f43d28c228ae8 WatchSource:0}: Error finding container 7f10931cf11f2d0cecb0dcd9b0071353faff8b085ac75ad0d12f43d28c228ae8: Status 404 returned error can't find the container with id 7f10931cf11f2d0cecb0dcd9b0071353faff8b085ac75ad0d12f43d28c228ae8 Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.156830 5116 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-util\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.156870 5116 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.156881 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9fz8n\" (UniqueName: \"kubernetes.io/projected/7ff50caf-ca9a-42d3-b54e-0abf8ea31022-kube-api-access-9fz8n\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.617260 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" event={"ID":"7ff50caf-ca9a-42d3-b54e-0abf8ea31022","Type":"ContainerDied","Data":"f50cf575d3661a7ed36886aac6a4c7978a33aa6f3ff504f40b29ed78aa740474"} Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.617508 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f50cf575d3661a7ed36886aac6a4c7978a33aa6f3ff504f40b29ed78aa740474" Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.617319 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgrhd" Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.618627 5116 generic.go:358] "Generic (PLEG): container finished" podID="7d859bc1-48d2-4a28-a665-567a8901e800" containerID="87454317a8716ad7634f8261f50194a80a2fc77bfff831b3b2afa2d1764232a9" exitCode=0 Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.618743 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" event={"ID":"7d859bc1-48d2-4a28-a665-567a8901e800","Type":"ContainerDied","Data":"87454317a8716ad7634f8261f50194a80a2fc77bfff831b3b2afa2d1764232a9"} Dec 09 14:27:43 crc kubenswrapper[5116]: I1209 14:27:43.618769 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" event={"ID":"7d859bc1-48d2-4a28-a665-567a8901e800","Type":"ContainerStarted","Data":"7f10931cf11f2d0cecb0dcd9b0071353faff8b085ac75ad0d12f43d28c228ae8"} Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.472315 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-qm9cn" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.540542 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-gsv2s"] Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.747984 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-zvmqv"] Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.773671 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7ff50caf-ca9a-42d3-b54e-0abf8ea31022" containerName="pull" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.773710 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff50caf-ca9a-42d3-b54e-0abf8ea31022" containerName="pull" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.773723 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7ff50caf-ca9a-42d3-b54e-0abf8ea31022" containerName="util" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.773730 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff50caf-ca9a-42d3-b54e-0abf8ea31022" containerName="util" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.773761 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7ff50caf-ca9a-42d3-b54e-0abf8ea31022" containerName="extract" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.773769 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff50caf-ca9a-42d3-b54e-0abf8ea31022" containerName="extract" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.773887 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="7ff50caf-ca9a-42d3-b54e-0abf8ea31022" containerName="extract" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.799435 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-zvmqv"] Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.799580 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-zvmqv" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.813998 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.814264 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-fvs46\"" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.814560 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.821847 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9"] Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.833318 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.834138 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct"] Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.846820 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.847045 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-jdv9z\"" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.847252 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9"] Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.847285 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct"] Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.847395 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.916529 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47f9ca7e-83ae-4e18-b85b-a27431e57da2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct\" (UID: \"47f9ca7e-83ae-4e18-b85b-a27431e57da2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.916582 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c658708-587d-4718-8a63-f899c8627817-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9\" (UID: \"7c658708-587d-4718-8a63-f899c8627817\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.916607 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47f9ca7e-83ae-4e18-b85b-a27431e57da2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct\" (UID: \"47f9ca7e-83ae-4e18-b85b-a27431e57da2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.916633 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hp29\" (UniqueName: \"kubernetes.io/projected/3e4f8681-97be-4809-908b-723c09d87ab1-kube-api-access-7hp29\") pod \"obo-prometheus-operator-86648f486b-zvmqv\" (UID: \"3e4f8681-97be-4809-908b-723c09d87ab1\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-zvmqv" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.916700 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c658708-587d-4718-8a63-f899c8627817-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9\" (UID: \"7c658708-587d-4718-8a63-f899c8627817\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.963595 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-q6bp4"] Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.972884 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-q6bp4" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.975456 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-q6bp4"] Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.977710 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-9d85d\"" Dec 09 14:27:46 crc kubenswrapper[5116]: I1209 14:27:46.977896 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.017908 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47f9ca7e-83ae-4e18-b85b-a27431e57da2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct\" (UID: \"47f9ca7e-83ae-4e18-b85b-a27431e57da2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.018002 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c658708-587d-4718-8a63-f899c8627817-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9\" (UID: \"7c658708-587d-4718-8a63-f899c8627817\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.018030 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47f9ca7e-83ae-4e18-b85b-a27431e57da2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct\" (UID: \"47f9ca7e-83ae-4e18-b85b-a27431e57da2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.018066 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7hp29\" (UniqueName: \"kubernetes.io/projected/3e4f8681-97be-4809-908b-723c09d87ab1-kube-api-access-7hp29\") pod \"obo-prometheus-operator-86648f486b-zvmqv\" (UID: \"3e4f8681-97be-4809-908b-723c09d87ab1\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-zvmqv" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.018093 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/056c1db2-79be-483a-9928-8277c5626641-observability-operator-tls\") pod \"observability-operator-78c97476f4-q6bp4\" (UID: \"056c1db2-79be-483a-9928-8277c5626641\") " pod="openshift-operators/observability-operator-78c97476f4-q6bp4" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.018123 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c658708-587d-4718-8a63-f899c8627817-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9\" (UID: \"7c658708-587d-4718-8a63-f899c8627817\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.018149 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-5gj57\" (UniqueName: \"kubernetes.io/projected/056c1db2-79be-483a-9928-8277c5626641-kube-api-access-5gj57\") pod \"observability-operator-78c97476f4-q6bp4\" (UID: \"056c1db2-79be-483a-9928-8277c5626641\") " pod="openshift-operators/observability-operator-78c97476f4-q6bp4" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.024638 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c658708-587d-4718-8a63-f899c8627817-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9\" (UID: \"7c658708-587d-4718-8a63-f899c8627817\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.025025 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47f9ca7e-83ae-4e18-b85b-a27431e57da2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct\" (UID: \"47f9ca7e-83ae-4e18-b85b-a27431e57da2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.030394 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c658708-587d-4718-8a63-f899c8627817-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9\" (UID: \"7c658708-587d-4718-8a63-f899c8627817\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.032991 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47f9ca7e-83ae-4e18-b85b-a27431e57da2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct\" (UID: \"47f9ca7e-83ae-4e18-b85b-a27431e57da2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.035485 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hp29\" (UniqueName: \"kubernetes.io/projected/3e4f8681-97be-4809-908b-723c09d87ab1-kube-api-access-7hp29\") pod \"obo-prometheus-operator-86648f486b-zvmqv\" (UID: \"3e4f8681-97be-4809-908b-723c09d87ab1\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-zvmqv" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.073950 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-ngsws"] Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.080599 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-ngsws" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.083979 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-ngsws"] Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.084979 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-ds7pp\"" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.119623 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5gj57\" (UniqueName: \"kubernetes.io/projected/056c1db2-79be-483a-9928-8277c5626641-kube-api-access-5gj57\") pod \"observability-operator-78c97476f4-q6bp4\" (UID: \"056c1db2-79be-483a-9928-8277c5626641\") " pod="openshift-operators/observability-operator-78c97476f4-q6bp4" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.119686 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/fbe557fa-51b2-4bae-b84f-bb73ba070e72-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-ngsws\" (UID: \"fbe557fa-51b2-4bae-b84f-bb73ba070e72\") " pod="openshift-operators/perses-operator-68bdb49cbf-ngsws" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.119807 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh2wd\" (UniqueName: \"kubernetes.io/projected/fbe557fa-51b2-4bae-b84f-bb73ba070e72-kube-api-access-fh2wd\") pod \"perses-operator-68bdb49cbf-ngsws\" (UID: \"fbe557fa-51b2-4bae-b84f-bb73ba070e72\") " pod="openshift-operators/perses-operator-68bdb49cbf-ngsws" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.119878 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/056c1db2-79be-483a-9928-8277c5626641-observability-operator-tls\") pod \"observability-operator-78c97476f4-q6bp4\" (UID: \"056c1db2-79be-483a-9928-8277c5626641\") " pod="openshift-operators/observability-operator-78c97476f4-q6bp4" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.128642 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/056c1db2-79be-483a-9928-8277c5626641-observability-operator-tls\") pod \"observability-operator-78c97476f4-q6bp4\" (UID: \"056c1db2-79be-483a-9928-8277c5626641\") " pod="openshift-operators/observability-operator-78c97476f4-q6bp4" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.141541 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gj57\" (UniqueName: \"kubernetes.io/projected/056c1db2-79be-483a-9928-8277c5626641-kube-api-access-5gj57\") pod \"observability-operator-78c97476f4-q6bp4\" (UID: \"056c1db2-79be-483a-9928-8277c5626641\") " pod="openshift-operators/observability-operator-78c97476f4-q6bp4" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.149963 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-zvmqv" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.165688 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.173257 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.221563 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fh2wd\" (UniqueName: \"kubernetes.io/projected/fbe557fa-51b2-4bae-b84f-bb73ba070e72-kube-api-access-fh2wd\") pod \"perses-operator-68bdb49cbf-ngsws\" (UID: \"fbe557fa-51b2-4bae-b84f-bb73ba070e72\") " pod="openshift-operators/perses-operator-68bdb49cbf-ngsws" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.221986 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/fbe557fa-51b2-4bae-b84f-bb73ba070e72-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-ngsws\" (UID: \"fbe557fa-51b2-4bae-b84f-bb73ba070e72\") " pod="openshift-operators/perses-operator-68bdb49cbf-ngsws" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.222841 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/fbe557fa-51b2-4bae-b84f-bb73ba070e72-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-ngsws\" (UID: \"fbe557fa-51b2-4bae-b84f-bb73ba070e72\") " pod="openshift-operators/perses-operator-68bdb49cbf-ngsws" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.249750 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh2wd\" (UniqueName: \"kubernetes.io/projected/fbe557fa-51b2-4bae-b84f-bb73ba070e72-kube-api-access-fh2wd\") pod \"perses-operator-68bdb49cbf-ngsws\" (UID: \"fbe557fa-51b2-4bae-b84f-bb73ba070e72\") " pod="openshift-operators/perses-operator-68bdb49cbf-ngsws" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.293503 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-q6bp4" Dec 09 14:27:47 crc kubenswrapper[5116]: I1209 14:27:47.395844 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-ngsws" Dec 09 14:27:48 crc kubenswrapper[5116]: I1209 14:27:48.282171 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9"] Dec 09 14:27:48 crc kubenswrapper[5116]: I1209 14:27:48.458493 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-ngsws"] Dec 09 14:27:48 crc kubenswrapper[5116]: I1209 14:27:48.467947 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-q6bp4"] Dec 09 14:27:48 crc kubenswrapper[5116]: W1209 14:27:48.472585 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbe557fa_51b2_4bae_b84f_bb73ba070e72.slice/crio-19600f87b6b11efc11a6aef1eec621c4eef1ce2784a7f9f227f683d22a52143a WatchSource:0}: Error finding container 19600f87b6b11efc11a6aef1eec621c4eef1ce2784a7f9f227f683d22a52143a: Status 404 returned error can't find the container with id 19600f87b6b11efc11a6aef1eec621c4eef1ce2784a7f9f227f683d22a52143a Dec 09 14:27:48 crc kubenswrapper[5116]: I1209 14:27:48.575079 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-zvmqv"] Dec 09 14:27:48 crc kubenswrapper[5116]: I1209 14:27:48.580869 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct"] Dec 09 14:27:48 crc kubenswrapper[5116]: I1209 14:27:48.700040 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9" event={"ID":"7c658708-587d-4718-8a63-f899c8627817","Type":"ContainerStarted","Data":"383d112979479ed76ab6462f1c5bba778c5395e1fab3a5dae828cec73beab8f4"} Dec 09 14:27:48 crc kubenswrapper[5116]: I1209 14:27:48.701044 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-zvmqv" event={"ID":"3e4f8681-97be-4809-908b-723c09d87ab1","Type":"ContainerStarted","Data":"b453e15dc0ee42fff5b79ee655aa1b0f9b5504b3e624b20b3011c8521c1337fe"} Dec 09 14:27:48 crc kubenswrapper[5116]: I1209 14:27:48.703031 5116 generic.go:358] "Generic (PLEG): container finished" podID="7d859bc1-48d2-4a28-a665-567a8901e800" containerID="269700491cffd69cb3217f5b31b129fd6d74e767075b0d6f3d1b99fdd4942acb" exitCode=0 Dec 09 14:27:48 crc kubenswrapper[5116]: I1209 14:27:48.703108 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" event={"ID":"7d859bc1-48d2-4a28-a665-567a8901e800","Type":"ContainerDied","Data":"269700491cffd69cb3217f5b31b129fd6d74e767075b0d6f3d1b99fdd4942acb"} Dec 09 14:27:48 crc kubenswrapper[5116]: I1209 14:27:48.704229 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-ngsws" event={"ID":"fbe557fa-51b2-4bae-b84f-bb73ba070e72","Type":"ContainerStarted","Data":"19600f87b6b11efc11a6aef1eec621c4eef1ce2784a7f9f227f683d22a52143a"} Dec 09 14:27:48 crc kubenswrapper[5116]: I1209 14:27:48.705633 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct" 
event={"ID":"47f9ca7e-83ae-4e18-b85b-a27431e57da2","Type":"ContainerStarted","Data":"91ed8cab201f0c93d33fb1a3a4423f9c06ae952e46322a8f123681862bcd9f80"} Dec 09 14:27:48 crc kubenswrapper[5116]: I1209 14:27:48.709427 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-q6bp4" event={"ID":"056c1db2-79be-483a-9928-8277c5626641","Type":"ContainerStarted","Data":"4f8c2ed1f6b4c1ea1c52eaaa233b3d4292fbb9fe49aed8dbf9a64373e4aeb2b5"} Dec 09 14:27:49 crc kubenswrapper[5116]: I1209 14:27:49.728058 5116 generic.go:358] "Generic (PLEG): container finished" podID="7d859bc1-48d2-4a28-a665-567a8901e800" containerID="ae8ac5dd7a77e38ea089b957e7e1367f613f03b85bc893d4a7a35357346d742a" exitCode=0 Dec 09 14:27:49 crc kubenswrapper[5116]: I1209 14:27:49.728352 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" event={"ID":"7d859bc1-48d2-4a28-a665-567a8901e800","Type":"ContainerDied","Data":"ae8ac5dd7a77e38ea089b957e7e1367f613f03b85bc893d4a7a35357346d742a"} Dec 09 14:27:50 crc kubenswrapper[5116]: I1209 14:27:50.899113 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-kxcdr"] Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.149219 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.195760 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7d859bc1-48d2-4a28-a665-567a8901e800-util\") pod \"7d859bc1-48d2-4a28-a665-567a8901e800\" (UID: \"7d859bc1-48d2-4a28-a665-567a8901e800\") " Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.195797 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqmxq\" (UniqueName: \"kubernetes.io/projected/7d859bc1-48d2-4a28-a665-567a8901e800-kube-api-access-lqmxq\") pod \"7d859bc1-48d2-4a28-a665-567a8901e800\" (UID: \"7d859bc1-48d2-4a28-a665-567a8901e800\") " Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.195849 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7d859bc1-48d2-4a28-a665-567a8901e800-bundle\") pod \"7d859bc1-48d2-4a28-a665-567a8901e800\" (UID: \"7d859bc1-48d2-4a28-a665-567a8901e800\") " Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.197812 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d859bc1-48d2-4a28-a665-567a8901e800-bundle" (OuterVolumeSpecName: "bundle") pod "7d859bc1-48d2-4a28-a665-567a8901e800" (UID: "7d859bc1-48d2-4a28-a665-567a8901e800"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.207143 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d859bc1-48d2-4a28-a665-567a8901e800-util" (OuterVolumeSpecName: "util") pod "7d859bc1-48d2-4a28-a665-567a8901e800" (UID: "7d859bc1-48d2-4a28-a665-567a8901e800"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.233847 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d859bc1-48d2-4a28-a665-567a8901e800-kube-api-access-lqmxq" (OuterVolumeSpecName: "kube-api-access-lqmxq") pod "7d859bc1-48d2-4a28-a665-567a8901e800" (UID: "7d859bc1-48d2-4a28-a665-567a8901e800"). InnerVolumeSpecName "kube-api-access-lqmxq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.296766 5116 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7d859bc1-48d2-4a28-a665-567a8901e800-util\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.296798 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lqmxq\" (UniqueName: \"kubernetes.io/projected/7d859bc1-48d2-4a28-a665-567a8901e800-kube-api-access-lqmxq\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.296824 5116 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7d859bc1-48d2-4a28-a665-567a8901e800-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.549542 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-kxcdr"] Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.549732 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-kxcdr" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.552980 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.553409 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-8dmgn\"" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.553628 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.609933 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9vjf\" (UniqueName: \"kubernetes.io/projected/303a2328-13e2-4e67-b48f-b215c4b070bf-kube-api-access-b9vjf\") pod \"interconnect-operator-78b9bd8798-kxcdr\" (UID: \"303a2328-13e2-4e67-b48f-b215c4b070bf\") " pod="service-telemetry/interconnect-operator-78b9bd8798-kxcdr" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.711131 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b9vjf\" (UniqueName: \"kubernetes.io/projected/303a2328-13e2-4e67-b48f-b215c4b070bf-kube-api-access-b9vjf\") pod \"interconnect-operator-78b9bd8798-kxcdr\" (UID: \"303a2328-13e2-4e67-b48f-b215c4b070bf\") " pod="service-telemetry/interconnect-operator-78b9bd8798-kxcdr" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.733212 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9vjf\" (UniqueName: \"kubernetes.io/projected/303a2328-13e2-4e67-b48f-b215c4b070bf-kube-api-access-b9vjf\") pod \"interconnect-operator-78b9bd8798-kxcdr\" (UID: \"303a2328-13e2-4e67-b48f-b215c4b070bf\") " 
pod="service-telemetry/interconnect-operator-78b9bd8798-kxcdr" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.792893 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" event={"ID":"7d859bc1-48d2-4a28-a665-567a8901e800","Type":"ContainerDied","Data":"7f10931cf11f2d0cecb0dcd9b0071353faff8b085ac75ad0d12f43d28c228ae8"} Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.792935 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f10931cf11f2d0cecb0dcd9b0071353faff8b085ac75ad0d12f43d28c228ae8" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.793107 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2nqvr" Dec 09 14:27:51 crc kubenswrapper[5116]: I1209 14:27:51.924657 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-kxcdr" Dec 09 14:27:52 crc kubenswrapper[5116]: I1209 14:27:52.167943 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:27:52 crc kubenswrapper[5116]: I1209 14:27:52.168025 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:27:52 crc kubenswrapper[5116]: I1209 14:27:52.888635 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-kxcdr"] Dec 09 14:27:52 crc kubenswrapper[5116]: W1209 14:27:52.903233 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod303a2328_13e2_4e67_b48f_b215c4b070bf.slice/crio-6d0aba5d734977f06fc072141ce2d2db6f225608983537e319fbc400e6a64d45 WatchSource:0}: Error finding container 6d0aba5d734977f06fc072141ce2d2db6f225608983537e319fbc400e6a64d45: Status 404 returned error can't find the container with id 6d0aba5d734977f06fc072141ce2d2db6f225608983537e319fbc400e6a64d45 Dec 09 14:27:53 crc kubenswrapper[5116]: I1209 14:27:53.705617 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-84c84fffb6-tt44c"] Dec 09 14:27:53 crc kubenswrapper[5116]: I1209 14:27:53.706364 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d859bc1-48d2-4a28-a665-567a8901e800" containerName="pull" Dec 09 14:27:53 crc kubenswrapper[5116]: I1209 14:27:53.706378 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d859bc1-48d2-4a28-a665-567a8901e800" containerName="pull" Dec 09 14:27:53 crc kubenswrapper[5116]: I1209 14:27:53.706423 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d859bc1-48d2-4a28-a665-567a8901e800" containerName="util" Dec 09 14:27:53 crc kubenswrapper[5116]: I1209 14:27:53.706431 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d859bc1-48d2-4a28-a665-567a8901e800" containerName="util" Dec 09 14:27:53 crc kubenswrapper[5116]: I1209 
14:27:53.706449 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d859bc1-48d2-4a28-a665-567a8901e800" containerName="extract" Dec 09 14:27:53 crc kubenswrapper[5116]: I1209 14:27:53.706459 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d859bc1-48d2-4a28-a665-567a8901e800" containerName="extract" Dec 09 14:27:53 crc kubenswrapper[5116]: I1209 14:27:53.706591 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="7d859bc1-48d2-4a28-a665-567a8901e800" containerName="extract" Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.272247 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-84c84fffb6-tt44c" Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.276068 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.276449 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-fl55t\"" Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.285371 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-kxcdr" event={"ID":"303a2328-13e2-4e67-b48f-b215c4b070bf","Type":"ContainerStarted","Data":"6d0aba5d734977f06fc072141ce2d2db6f225608983537e319fbc400e6a64d45"} Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.285423 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-84c84fffb6-tt44c"] Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.374629 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ba59cc9-0a51-4cd6-979d-47485b9a2a0b-apiservice-cert\") pod \"elastic-operator-84c84fffb6-tt44c\" (UID: \"1ba59cc9-0a51-4cd6-979d-47485b9a2a0b\") " pod="service-telemetry/elastic-operator-84c84fffb6-tt44c" Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.375337 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-995q9\" (UniqueName: \"kubernetes.io/projected/1ba59cc9-0a51-4cd6-979d-47485b9a2a0b-kube-api-access-995q9\") pod \"elastic-operator-84c84fffb6-tt44c\" (UID: \"1ba59cc9-0a51-4cd6-979d-47485b9a2a0b\") " pod="service-telemetry/elastic-operator-84c84fffb6-tt44c" Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.375385 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ba59cc9-0a51-4cd6-979d-47485b9a2a0b-webhook-cert\") pod \"elastic-operator-84c84fffb6-tt44c\" (UID: \"1ba59cc9-0a51-4cd6-979d-47485b9a2a0b\") " pod="service-telemetry/elastic-operator-84c84fffb6-tt44c" Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.476832 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ba59cc9-0a51-4cd6-979d-47485b9a2a0b-apiservice-cert\") pod \"elastic-operator-84c84fffb6-tt44c\" (UID: \"1ba59cc9-0a51-4cd6-979d-47485b9a2a0b\") " pod="service-telemetry/elastic-operator-84c84fffb6-tt44c" Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.477134 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-995q9\" (UniqueName: 
\"kubernetes.io/projected/1ba59cc9-0a51-4cd6-979d-47485b9a2a0b-kube-api-access-995q9\") pod \"elastic-operator-84c84fffb6-tt44c\" (UID: \"1ba59cc9-0a51-4cd6-979d-47485b9a2a0b\") " pod="service-telemetry/elastic-operator-84c84fffb6-tt44c" Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.477170 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ba59cc9-0a51-4cd6-979d-47485b9a2a0b-webhook-cert\") pod \"elastic-operator-84c84fffb6-tt44c\" (UID: \"1ba59cc9-0a51-4cd6-979d-47485b9a2a0b\") " pod="service-telemetry/elastic-operator-84c84fffb6-tt44c" Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.484308 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ba59cc9-0a51-4cd6-979d-47485b9a2a0b-webhook-cert\") pod \"elastic-operator-84c84fffb6-tt44c\" (UID: \"1ba59cc9-0a51-4cd6-979d-47485b9a2a0b\") " pod="service-telemetry/elastic-operator-84c84fffb6-tt44c" Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.485706 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ba59cc9-0a51-4cd6-979d-47485b9a2a0b-apiservice-cert\") pod \"elastic-operator-84c84fffb6-tt44c\" (UID: \"1ba59cc9-0a51-4cd6-979d-47485b9a2a0b\") " pod="service-telemetry/elastic-operator-84c84fffb6-tt44c" Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.496630 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-995q9\" (UniqueName: \"kubernetes.io/projected/1ba59cc9-0a51-4cd6-979d-47485b9a2a0b-kube-api-access-995q9\") pod \"elastic-operator-84c84fffb6-tt44c\" (UID: \"1ba59cc9-0a51-4cd6-979d-47485b9a2a0b\") " pod="service-telemetry/elastic-operator-84c84fffb6-tt44c" Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.596259 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-84c84fffb6-tt44c" Dec 09 14:27:54 crc kubenswrapper[5116]: I1209 14:27:54.890323 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-84c84fffb6-tt44c"] Dec 09 14:27:55 crc kubenswrapper[5116]: I1209 14:27:55.896478 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-84c84fffb6-tt44c" event={"ID":"1ba59cc9-0a51-4cd6-979d-47485b9a2a0b","Type":"ContainerStarted","Data":"eb6a665fe48b93f61c8ba5f021b13fb07c95dfd078e8c2d6f819881e931a6936"} Dec 09 14:28:11 crc kubenswrapper[5116]: I1209 14:28:11.028173 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tpvkv"] Dec 09 14:28:11 crc kubenswrapper[5116]: I1209 14:28:11.038137 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tpvkv" Dec 09 14:28:11 crc kubenswrapper[5116]: I1209 14:28:11.044302 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:28:11 crc kubenswrapper[5116]: I1209 14:28:11.044386 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 09 14:28:11 crc kubenswrapper[5116]: I1209 14:28:11.044502 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-xms5p\"" Dec 09 14:28:11 crc kubenswrapper[5116]: I1209 14:28:11.063197 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tpvkv"] Dec 09 14:28:11 crc kubenswrapper[5116]: I1209 14:28:11.181718 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkz5t\" (UniqueName: \"kubernetes.io/projected/bad5001d-ec6c-4d68-a762-1ec53a1252f1-kube-api-access-lkz5t\") pod \"cert-manager-operator-controller-manager-64c74584c4-tpvkv\" (UID: \"bad5001d-ec6c-4d68-a762-1ec53a1252f1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tpvkv" Dec 09 14:28:11 crc kubenswrapper[5116]: I1209 14:28:11.181846 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bad5001d-ec6c-4d68-a762-1ec53a1252f1-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-tpvkv\" (UID: \"bad5001d-ec6c-4d68-a762-1ec53a1252f1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tpvkv" Dec 09 14:28:11 crc kubenswrapper[5116]: I1209 14:28:11.283881 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lkz5t\" (UniqueName: \"kubernetes.io/projected/bad5001d-ec6c-4d68-a762-1ec53a1252f1-kube-api-access-lkz5t\") pod \"cert-manager-operator-controller-manager-64c74584c4-tpvkv\" (UID: \"bad5001d-ec6c-4d68-a762-1ec53a1252f1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tpvkv" Dec 09 14:28:11 crc kubenswrapper[5116]: I1209 14:28:11.284012 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bad5001d-ec6c-4d68-a762-1ec53a1252f1-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-tpvkv\" (UID: \"bad5001d-ec6c-4d68-a762-1ec53a1252f1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tpvkv" Dec 09 14:28:11 crc kubenswrapper[5116]: I1209 14:28:11.285268 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bad5001d-ec6c-4d68-a762-1ec53a1252f1-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-tpvkv\" (UID: \"bad5001d-ec6c-4d68-a762-1ec53a1252f1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tpvkv" Dec 09 14:28:11 crc kubenswrapper[5116]: I1209 14:28:11.323480 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkz5t\" (UniqueName: \"kubernetes.io/projected/bad5001d-ec6c-4d68-a762-1ec53a1252f1-kube-api-access-lkz5t\") pod \"cert-manager-operator-controller-manager-64c74584c4-tpvkv\" 
(UID: \"bad5001d-ec6c-4d68-a762-1ec53a1252f1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tpvkv" Dec 09 14:28:11 crc kubenswrapper[5116]: I1209 14:28:11.354010 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tpvkv" Dec 09 14:28:11 crc kubenswrapper[5116]: I1209 14:28:11.578293 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" podUID="21d797a0-87da-4b3e-8322-c8f27f9bb2d4" containerName="registry" containerID="cri-o://25c4a44165183ffd747cda1b6ad586d77399e9b686438b257d71b29caf4a054b" gracePeriod=30 Dec 09 14:28:12 crc kubenswrapper[5116]: I1209 14:28:12.138282 5116 generic.go:358] "Generic (PLEG): container finished" podID="21d797a0-87da-4b3e-8322-c8f27f9bb2d4" containerID="25c4a44165183ffd747cda1b6ad586d77399e9b686438b257d71b29caf4a054b" exitCode=0 Dec 09 14:28:12 crc kubenswrapper[5116]: I1209 14:28:12.138518 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" event={"ID":"21d797a0-87da-4b3e-8322-c8f27f9bb2d4","Type":"ContainerDied","Data":"25c4a44165183ffd747cda1b6ad586d77399e9b686438b257d71b29caf4a054b"} Dec 09 14:28:13 crc kubenswrapper[5116]: I1209 14:28:13.899161 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:28:13 crc kubenswrapper[5116]: I1209 14:28:13.902303 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tpvkv"] Dec 09 14:28:13 crc kubenswrapper[5116]: W1209 14:28:13.936634 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbad5001d_ec6c_4d68_a762_1ec53a1252f1.slice/crio-401679dfd641f0366607b8df0b6a65c105e1d40959d71623ad7bd82931e88475 WatchSource:0}: Error finding container 401679dfd641f0366607b8df0b6a65c105e1d40959d71623ad7bd82931e88475: Status 404 returned error can't find the container with id 401679dfd641f0366607b8df0b6a65c105e1d40959d71623ad7bd82931e88475 Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.071426 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.071712 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-registry-tls\") pod \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.071765 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-installation-pull-secrets\") pod \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.071810 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-ca-trust-extracted\") pod \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.071839 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-registry-certificates\") pod \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.071881 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x99pj\" (UniqueName: \"kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-kube-api-access-x99pj\") pod \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.071929 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-bound-sa-token\") pod \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.071944 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-trusted-ca\") pod \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\" (UID: \"21d797a0-87da-4b3e-8322-c8f27f9bb2d4\") " Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.072786 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "21d797a0-87da-4b3e-8322-c8f27f9bb2d4" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.089298 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "21d797a0-87da-4b3e-8322-c8f27f9bb2d4" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.096347 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "21d797a0-87da-4b3e-8322-c8f27f9bb2d4" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.096930 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "21d797a0-87da-4b3e-8322-c8f27f9bb2d4" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.097456 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "21d797a0-87da-4b3e-8322-c8f27f9bb2d4" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.109371 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "21d797a0-87da-4b3e-8322-c8f27f9bb2d4" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.110244 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-kube-api-access-x99pj" (OuterVolumeSpecName: "kube-api-access-x99pj") pod "21d797a0-87da-4b3e-8322-c8f27f9bb2d4" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4"). InnerVolumeSpecName "kube-api-access-x99pj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.149191 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-ngsws" event={"ID":"fbe557fa-51b2-4bae-b84f-bb73ba070e72","Type":"ContainerStarted","Data":"52321697ef7b541e396eb787b489e544d108e842b90babd023ea0041867b546b"} Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.150140 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-ngsws" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.151407 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tpvkv" event={"ID":"bad5001d-ec6c-4d68-a762-1ec53a1252f1","Type":"ContainerStarted","Data":"401679dfd641f0366607b8df0b6a65c105e1d40959d71623ad7bd82931e88475"} Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.153226 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct" event={"ID":"47f9ca7e-83ae-4e18-b85b-a27431e57da2","Type":"ContainerStarted","Data":"4aa720094b5e24757764628d2e440317e670bd1db6ba9b70a1894520a2c72e10"} Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.154493 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-q6bp4" event={"ID":"056c1db2-79be-483a-9928-8277c5626641","Type":"ContainerStarted","Data":"1efc02931db5750ae092258cb6c8ca73dd4b975120b744c42ea80e01141c5cf8"} Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.155361 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-q6bp4" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.156006 5116 patch_prober.go:28] interesting pod/observability-operator-78c97476f4-q6bp4 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.50:8081/healthz\": dial tcp 10.217.0.50:8081: connect: connection refused" start-of-body= Dec 09 
14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.156046 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-78c97476f4-q6bp4" podUID="056c1db2-79be-483a-9928-8277c5626641" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.50:8081/healthz\": dial tcp 10.217.0.50:8081: connect: connection refused" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.156371 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9" event={"ID":"7c658708-587d-4718-8a63-f899c8627817","Type":"ContainerStarted","Data":"9b3eac115a536025e3056aca08c940f32b44fc76057aba5274de83ff3da0cfdd"} Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.157590 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-kxcdr" event={"ID":"303a2328-13e2-4e67-b48f-b215c4b070bf","Type":"ContainerStarted","Data":"2e416fc3fd0ae5f0699fdab4e8e8f58ece63eab3c014743a6cff5201aeb86e7d"} Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.159354 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-84c84fffb6-tt44c" event={"ID":"1ba59cc9-0a51-4cd6-979d-47485b9a2a0b","Type":"ContainerStarted","Data":"066bfb12a237767bb271c6b11f35832a1fdfefd7556ae6b59bd9fb4178628b10"} Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.160737 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" event={"ID":"21d797a0-87da-4b3e-8322-c8f27f9bb2d4","Type":"ContainerDied","Data":"f3a9814b83eb60079abff009b32f25e526dec863a0b3a1d4c69af18835238559"} Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.160772 5116 scope.go:117] "RemoveContainer" containerID="25c4a44165183ffd747cda1b6ad586d77399e9b686438b257d71b29caf4a054b" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.160876 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-gsv2s" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.196432 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x99pj\" (UniqueName: \"kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-kube-api-access-x99pj\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.196461 5116 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.196472 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.196483 5116 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.196493 5116 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.196506 5116 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.196516 5116 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/21d797a0-87da-4b3e-8322-c8f27f9bb2d4-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.272632 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "21d797a0-87da-4b3e-8322-c8f27f9bb2d4" (UID: "21d797a0-87da-4b3e-8322-c8f27f9bb2d4"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.283397 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-ngsws" podStartSLOduration=2.054132626 podStartE2EDuration="27.283372343s" podCreationTimestamp="2025-12-09 14:27:47 +0000 UTC" firstStartedPulling="2025-12-09 14:27:48.476697577 +0000 UTC m=+806.998442375" lastFinishedPulling="2025-12-09 14:28:13.705937294 +0000 UTC m=+832.227682092" observedRunningTime="2025-12-09 14:28:14.275348484 +0000 UTC m=+832.797093282" watchObservedRunningTime="2025-12-09 14:28:14.283372343 +0000 UTC m=+832.805117151" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.299550 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-jmjw9" podStartSLOduration=2.84860984 podStartE2EDuration="28.299532354s" podCreationTimestamp="2025-12-09 14:27:46 +0000 UTC" firstStartedPulling="2025-12-09 14:27:48.322223241 +0000 UTC m=+806.843968039" lastFinishedPulling="2025-12-09 14:28:13.773145755 +0000 UTC m=+832.294890553" observedRunningTime="2025-12-09 14:28:14.297900432 +0000 UTC m=+832.819645230" watchObservedRunningTime="2025-12-09 14:28:14.299532354 +0000 UTC m=+832.821277152" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.382110 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-q6bp4" podStartSLOduration=3.294242225 podStartE2EDuration="28.382093116s" podCreationTimestamp="2025-12-09 14:27:46 +0000 UTC" firstStartedPulling="2025-12-09 14:27:48.53473865 +0000 UTC m=+807.056483448" lastFinishedPulling="2025-12-09 14:28:13.622589541 +0000 UTC m=+832.144334339" observedRunningTime="2025-12-09 14:28:14.381360597 +0000 UTC m=+832.903105405" watchObservedRunningTime="2025-12-09 14:28:14.382093116 +0000 UTC m=+832.903837914" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.412231 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-kxcdr" podStartSLOduration=3.596754809 podStartE2EDuration="24.412215611s" podCreationTimestamp="2025-12-09 14:27:50 +0000 UTC" firstStartedPulling="2025-12-09 14:27:52.90651568 +0000 UTC m=+811.428260478" lastFinishedPulling="2025-12-09 14:28:13.721976482 +0000 UTC m=+832.243721280" observedRunningTime="2025-12-09 14:28:14.406356239 +0000 UTC m=+832.928101037" watchObservedRunningTime="2025-12-09 14:28:14.412215611 +0000 UTC m=+832.933960409" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.443162 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f699f6bcd-z4qct" podStartSLOduration=3.343426706 podStartE2EDuration="28.443146747s" podCreationTimestamp="2025-12-09 14:27:46 +0000 UTC" firstStartedPulling="2025-12-09 14:27:48.606260334 +0000 UTC m=+807.128005122" lastFinishedPulling="2025-12-09 14:28:13.705980365 +0000 UTC m=+832.227725163" observedRunningTime="2025-12-09 14:28:14.430911569 +0000 UTC m=+832.952656377" watchObservedRunningTime="2025-12-09 14:28:14.443146747 +0000 UTC m=+832.964891545" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.462421 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-84c84fffb6-tt44c" podStartSLOduration=2.732559053 
podStartE2EDuration="21.462400799s" podCreationTimestamp="2025-12-09 14:27:53 +0000 UTC" firstStartedPulling="2025-12-09 14:27:54.912030958 +0000 UTC m=+813.433775756" lastFinishedPulling="2025-12-09 14:28:13.641872714 +0000 UTC m=+832.163617502" observedRunningTime="2025-12-09 14:28:14.457610524 +0000 UTC m=+832.979355342" watchObservedRunningTime="2025-12-09 14:28:14.462400799 +0000 UTC m=+832.984145597" Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.505208 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-gsv2s"] Dec 09 14:28:14 crc kubenswrapper[5116]: I1209 14:28:14.510364 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-gsv2s"] Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.171861 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-zvmqv" event={"ID":"3e4f8681-97be-4809-908b-723c09d87ab1","Type":"ContainerStarted","Data":"0d2a928e5391f0f5252110dfc04d883bb4345884e2741cfbce0934da1f7243c0"} Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.193803 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-zvmqv" podStartSLOduration=4.093230397 podStartE2EDuration="29.19378129s" podCreationTimestamp="2025-12-09 14:27:46 +0000 UTC" firstStartedPulling="2025-12-09 14:27:48.605850723 +0000 UTC m=+807.127595511" lastFinishedPulling="2025-12-09 14:28:13.706401606 +0000 UTC m=+832.228146404" observedRunningTime="2025-12-09 14:28:15.187906457 +0000 UTC m=+833.709651265" watchObservedRunningTime="2025-12-09 14:28:15.19378129 +0000 UTC m=+833.715526088" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.240782 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-q6bp4" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.471619 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.472168 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="21d797a0-87da-4b3e-8322-c8f27f9bb2d4" containerName="registry" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.472186 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d797a0-87da-4b3e-8322-c8f27f9bb2d4" containerName="registry" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.472296 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="21d797a0-87da-4b3e-8322-c8f27f9bb2d4" containerName="registry" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.497253 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.497419 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.499442 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.502291 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.502425 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.502757 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.503146 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.503261 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-rwcnx\"" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.503351 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.503423 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.517240 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.518324 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/5f5d7fc6-3756-4d60-832e-45aace6c972d-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.518383 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.518481 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.518524 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" 
(UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.518544 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.518564 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.518584 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.518601 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.518618 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.518638 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.518656 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.518678 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: 
\"kubernetes.io/configmap/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.518698 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.518732 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.518751 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.619978 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/5f5d7fc6-3756-4d60-832e-45aace6c972d-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620044 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620087 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620116 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620137 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-remote-certificate-authorities\") pod 
\"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620164 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620189 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620219 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620498 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620551 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620579 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620609 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620641 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620697 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620725 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620848 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.620929 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.621187 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.621234 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.621451 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.621551 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5f5d7fc6-3756-4d60-832e-45aace6c972d-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.622250 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: 
\"kubernetes.io/configmap/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.622974 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.626465 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.626466 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.627563 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/5f5d7fc6-3756-4d60-832e-45aace6c972d-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.627677 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.627689 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.628619 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.631683 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/5f5d7fc6-3756-4d60-832e-45aace6c972d-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"5f5d7fc6-3756-4d60-832e-45aace6c972d\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.757964 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d797a0-87da-4b3e-8322-c8f27f9bb2d4" path="/var/lib/kubelet/pods/21d797a0-87da-4b3e-8322-c8f27f9bb2d4/volumes" Dec 09 14:28:15 crc kubenswrapper[5116]: I1209 14:28:15.813671 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:16 crc kubenswrapper[5116]: I1209 14:28:16.330102 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 09 14:28:17 crc kubenswrapper[5116]: W1209 14:28:17.317527 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f5d7fc6_3756_4d60_832e_45aace6c972d.slice/crio-ce2a2e56987810b07ce8d5d3847338221e7556b3e08829b77a51216f419cb6ba WatchSource:0}: Error finding container ce2a2e56987810b07ce8d5d3847338221e7556b3e08829b77a51216f419cb6ba: Status 404 returned error can't find the container with id ce2a2e56987810b07ce8d5d3847338221e7556b3e08829b77a51216f419cb6ba Dec 09 14:28:18 crc kubenswrapper[5116]: I1209 14:28:18.199879 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"5f5d7fc6-3756-4d60-832e-45aace6c972d","Type":"ContainerStarted","Data":"ce2a2e56987810b07ce8d5d3847338221e7556b3e08829b77a51216f419cb6ba"} Dec 09 14:28:18 crc kubenswrapper[5116]: I1209 14:28:18.201799 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tpvkv" event={"ID":"bad5001d-ec6c-4d68-a762-1ec53a1252f1","Type":"ContainerStarted","Data":"580cf1374aac2904ec4c905d34caafb2c8f26075d8a954cbd131872e65b552d4"} Dec 09 14:28:18 crc kubenswrapper[5116]: I1209 14:28:18.222404 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tpvkv" podStartSLOduration=3.785614453 podStartE2EDuration="7.222389994s" podCreationTimestamp="2025-12-09 14:28:11 +0000 UTC" firstStartedPulling="2025-12-09 14:28:13.939264145 +0000 UTC m=+832.461008943" lastFinishedPulling="2025-12-09 14:28:17.376039676 +0000 UTC m=+835.897784484" observedRunningTime="2025-12-09 14:28:18.22145758 +0000 UTC m=+836.743202378" watchObservedRunningTime="2025-12-09 14:28:18.222389994 +0000 UTC m=+836.744134792" Dec 09 14:28:21 crc kubenswrapper[5116]: I1209 14:28:21.143084 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl"] Dec 09 14:28:22 crc kubenswrapper[5116]: I1209 14:28:22.166517 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:28:22 crc kubenswrapper[5116]: I1209 14:28:22.166740 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:28:25 crc kubenswrapper[5116]: I1209 14:28:25.170654 5116 kubelet.go:2544] "SyncLoop 
UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl"] Dec 09 14:28:25 crc kubenswrapper[5116]: I1209 14:28:25.170710 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-tb56f"] Dec 09 14:28:25 crc kubenswrapper[5116]: I1209 14:28:25.298395 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glxsj\" (UniqueName: \"kubernetes.io/projected/6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d-kube-api-access-glxsj\") pod \"cert-manager-webhook-7894b5b9b4-t9hvl\" (UID: \"6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl" Dec 09 14:28:25 crc kubenswrapper[5116]: I1209 14:28:25.298874 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-t9hvl\" (UID: \"6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl" Dec 09 14:28:25 crc kubenswrapper[5116]: I1209 14:28:25.399855 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-glxsj\" (UniqueName: \"kubernetes.io/projected/6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d-kube-api-access-glxsj\") pod \"cert-manager-webhook-7894b5b9b4-t9hvl\" (UID: \"6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl" Dec 09 14:28:25 crc kubenswrapper[5116]: I1209 14:28:25.399912 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-t9hvl\" (UID: \"6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl" Dec 09 14:28:25 crc kubenswrapper[5116]: E1209 14:28:25.412732 5116 projected.go:289] Couldn't get configMap cert-manager/kube-root-ca.crt: object "cert-manager"/"kube-root-ca.crt" not registered Dec 09 14:28:25 crc kubenswrapper[5116]: E1209 14:28:25.412794 5116 projected.go:289] Couldn't get configMap cert-manager/openshift-service-ca.crt: object "cert-manager"/"openshift-service-ca.crt" not registered Dec 09 14:28:25 crc kubenswrapper[5116]: E1209 14:28:25.412812 5116 projected.go:194] Error preparing data for projected volume kube-api-access-glxsj for pod cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl: [object "cert-manager"/"kube-root-ca.crt" not registered, object "cert-manager"/"openshift-service-ca.crt" not registered] Dec 09 14:28:25 crc kubenswrapper[5116]: E1209 14:28:25.412935 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d-kube-api-access-glxsj podName:6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d nodeName:}" failed. No retries permitted until 2025-12-09 14:28:25.912900367 +0000 UTC m=+844.434645165 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-glxsj" (UniqueName: "kubernetes.io/projected/6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d-kube-api-access-glxsj") pod "cert-manager-webhook-7894b5b9b4-t9hvl" (UID: "6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d") : [object "cert-manager"/"kube-root-ca.crt" not registered, object "cert-manager"/"openshift-service-ca.crt" not registered] Dec 09 14:28:25 crc kubenswrapper[5116]: I1209 14:28:25.425871 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-t9hvl\" (UID: \"6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl" Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.008446 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-glxsj\" (UniqueName: \"kubernetes.io/projected/6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d-kube-api-access-glxsj\") pod \"cert-manager-webhook-7894b5b9b4-t9hvl\" (UID: \"6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl" Dec 09 14:28:26 crc kubenswrapper[5116]: E1209 14:28:26.008598 5116 projected.go:289] Couldn't get configMap cert-manager/kube-root-ca.crt: object "cert-manager"/"kube-root-ca.crt" not registered Dec 09 14:28:26 crc kubenswrapper[5116]: E1209 14:28:26.008617 5116 projected.go:289] Couldn't get configMap cert-manager/openshift-service-ca.crt: object "cert-manager"/"openshift-service-ca.crt" not registered Dec 09 14:28:26 crc kubenswrapper[5116]: E1209 14:28:26.008627 5116 projected.go:194] Error preparing data for projected volume kube-api-access-glxsj for pod cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl: [object "cert-manager"/"kube-root-ca.crt" not registered, object "cert-manager"/"openshift-service-ca.crt" not registered] Dec 09 14:28:26 crc kubenswrapper[5116]: E1209 14:28:26.008712 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d-kube-api-access-glxsj podName:6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d nodeName:}" failed. No retries permitted until 2025-12-09 14:28:27.008698424 +0000 UTC m=+845.530443212 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-glxsj" (UniqueName: "kubernetes.io/projected/6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d-kube-api-access-glxsj") pod "cert-manager-webhook-7894b5b9b4-t9hvl" (UID: "6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d") : [object "cert-manager"/"kube-root-ca.crt" not registered, object "cert-manager"/"openshift-service-ca.crt" not registered] Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.208934 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-tb56f"] Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.209610 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl" Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.211310 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tb56f" Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.212758 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.213036 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-x7hvr\"" Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.213087 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.214400 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-8zhfm\"" Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.216005 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-ngsws" Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.313334 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7kbm\" (UniqueName: \"kubernetes.io/projected/c1753c72-e77b-4e36-8782-7479a468e7fa-kube-api-access-l7kbm\") pod \"cert-manager-cainjector-7dbf76d5c8-tb56f\" (UID: \"c1753c72-e77b-4e36-8782-7479a468e7fa\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tb56f" Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.313852 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1753c72-e77b-4e36-8782-7479a468e7fa-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-tb56f\" (UID: \"c1753c72-e77b-4e36-8782-7479a468e7fa\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tb56f" Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.415314 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1753c72-e77b-4e36-8782-7479a468e7fa-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-tb56f\" (UID: \"c1753c72-e77b-4e36-8782-7479a468e7fa\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tb56f" Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.415413 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7kbm\" (UniqueName: \"kubernetes.io/projected/c1753c72-e77b-4e36-8782-7479a468e7fa-kube-api-access-l7kbm\") pod \"cert-manager-cainjector-7dbf76d5c8-tb56f\" (UID: \"c1753c72-e77b-4e36-8782-7479a468e7fa\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tb56f" Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.438783 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1753c72-e77b-4e36-8782-7479a468e7fa-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-tb56f\" (UID: \"c1753c72-e77b-4e36-8782-7479a468e7fa\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tb56f" Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.439901 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7kbm\" (UniqueName: \"kubernetes.io/projected/c1753c72-e77b-4e36-8782-7479a468e7fa-kube-api-access-l7kbm\") pod \"cert-manager-cainjector-7dbf76d5c8-tb56f\" (UID: 
\"c1753c72-e77b-4e36-8782-7479a468e7fa\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tb56f" Dec 09 14:28:26 crc kubenswrapper[5116]: I1209 14:28:26.541075 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tb56f" Dec 09 14:28:27 crc kubenswrapper[5116]: I1209 14:28:27.024390 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-glxsj\" (UniqueName: \"kubernetes.io/projected/6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d-kube-api-access-glxsj\") pod \"cert-manager-webhook-7894b5b9b4-t9hvl\" (UID: \"6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl" Dec 09 14:28:27 crc kubenswrapper[5116]: I1209 14:28:27.045964 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-glxsj\" (UniqueName: \"kubernetes.io/projected/6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d-kube-api-access-glxsj\") pod \"cert-manager-webhook-7894b5b9b4-t9hvl\" (UID: \"6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl" Dec 09 14:28:27 crc kubenswrapper[5116]: I1209 14:28:27.132085 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl" Dec 09 14:28:30 crc kubenswrapper[5116]: I1209 14:28:30.554908 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mwjpr"] Dec 09 14:28:30 crc kubenswrapper[5116]: I1209 14:28:30.566191 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mwjpr"] Dec 09 14:28:30 crc kubenswrapper[5116]: I1209 14:28:30.566331 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:30 crc kubenswrapper[5116]: I1209 14:28:30.632584 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2196786e-546e-470b-a5f8-7ec8ac374e6a-utilities\") pod \"community-operators-mwjpr\" (UID: \"2196786e-546e-470b-a5f8-7ec8ac374e6a\") " pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:30 crc kubenswrapper[5116]: I1209 14:28:30.632747 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2196786e-546e-470b-a5f8-7ec8ac374e6a-catalog-content\") pod \"community-operators-mwjpr\" (UID: \"2196786e-546e-470b-a5f8-7ec8ac374e6a\") " pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:30 crc kubenswrapper[5116]: I1209 14:28:30.632808 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jljzm\" (UniqueName: \"kubernetes.io/projected/2196786e-546e-470b-a5f8-7ec8ac374e6a-kube-api-access-jljzm\") pod \"community-operators-mwjpr\" (UID: \"2196786e-546e-470b-a5f8-7ec8ac374e6a\") " pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:30 crc kubenswrapper[5116]: I1209 14:28:30.758645 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2196786e-546e-470b-a5f8-7ec8ac374e6a-utilities\") pod \"community-operators-mwjpr\" (UID: \"2196786e-546e-470b-a5f8-7ec8ac374e6a\") " pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:30 crc kubenswrapper[5116]: I1209 14:28:30.758728 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2196786e-546e-470b-a5f8-7ec8ac374e6a-catalog-content\") pod \"community-operators-mwjpr\" (UID: \"2196786e-546e-470b-a5f8-7ec8ac374e6a\") " pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:30 crc kubenswrapper[5116]: I1209 14:28:30.758760 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jljzm\" (UniqueName: \"kubernetes.io/projected/2196786e-546e-470b-a5f8-7ec8ac374e6a-kube-api-access-jljzm\") pod \"community-operators-mwjpr\" (UID: \"2196786e-546e-470b-a5f8-7ec8ac374e6a\") " pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:30 crc kubenswrapper[5116]: I1209 14:28:30.759547 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2196786e-546e-470b-a5f8-7ec8ac374e6a-utilities\") pod \"community-operators-mwjpr\" (UID: \"2196786e-546e-470b-a5f8-7ec8ac374e6a\") " pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:30 crc kubenswrapper[5116]: I1209 14:28:30.759576 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2196786e-546e-470b-a5f8-7ec8ac374e6a-catalog-content\") pod \"community-operators-mwjpr\" (UID: \"2196786e-546e-470b-a5f8-7ec8ac374e6a\") " pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:30 crc kubenswrapper[5116]: I1209 14:28:30.786831 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jljzm\" (UniqueName: \"kubernetes.io/projected/2196786e-546e-470b-a5f8-7ec8ac374e6a-kube-api-access-jljzm\") pod 
\"community-operators-mwjpr\" (UID: \"2196786e-546e-470b-a5f8-7ec8ac374e6a\") " pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:30 crc kubenswrapper[5116]: I1209 14:28:30.948094 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:33 crc kubenswrapper[5116]: I1209 14:28:33.365049 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-tb56f"] Dec 09 14:28:33 crc kubenswrapper[5116]: W1209 14:28:33.443628 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1753c72_e77b_4e36_8782_7479a468e7fa.slice/crio-4bf3f3404872b7b39a3fd0c0470e03469448b46d150e1581c0dcdd9b7cc16fb4 WatchSource:0}: Error finding container 4bf3f3404872b7b39a3fd0c0470e03469448b46d150e1581c0dcdd9b7cc16fb4: Status 404 returned error can't find the container with id 4bf3f3404872b7b39a3fd0c0470e03469448b46d150e1581c0dcdd9b7cc16fb4 Dec 09 14:28:33 crc kubenswrapper[5116]: I1209 14:28:33.694797 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl"] Dec 09 14:28:33 crc kubenswrapper[5116]: I1209 14:28:33.712829 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mwjpr"] Dec 09 14:28:33 crc kubenswrapper[5116]: W1209 14:28:33.721525 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b6978fa_eaf2_4f64_88ef_c4f1b3c8ca8d.slice/crio-2aa40a7de3d2fdc403e62e3106405897a5b3683a497fe4e64001e4d036aec239 WatchSource:0}: Error finding container 2aa40a7de3d2fdc403e62e3106405897a5b3683a497fe4e64001e4d036aec239: Status 404 returned error can't find the container with id 2aa40a7de3d2fdc403e62e3106405897a5b3683a497fe4e64001e4d036aec239 Dec 09 14:28:33 crc kubenswrapper[5116]: W1209 14:28:33.729236 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2196786e_546e_470b_a5f8_7ec8ac374e6a.slice/crio-21ad8a727b896f04a416064253222e26db0cc05d16cbda15599e00d40326923b WatchSource:0}: Error finding container 21ad8a727b896f04a416064253222e26db0cc05d16cbda15599e00d40326923b: Status 404 returned error can't find the container with id 21ad8a727b896f04a416064253222e26db0cc05d16cbda15599e00d40326923b Dec 09 14:28:34 crc kubenswrapper[5116]: I1209 14:28:34.328376 5116 generic.go:358] "Generic (PLEG): container finished" podID="2196786e-546e-470b-a5f8-7ec8ac374e6a" containerID="875a7b87331351c54ddfb8882536d7d63e4326c2f8c0b2f563d99144f097ca6f" exitCode=0 Dec 09 14:28:34 crc kubenswrapper[5116]: I1209 14:28:34.328487 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mwjpr" event={"ID":"2196786e-546e-470b-a5f8-7ec8ac374e6a","Type":"ContainerDied","Data":"875a7b87331351c54ddfb8882536d7d63e4326c2f8c0b2f563d99144f097ca6f"} Dec 09 14:28:34 crc kubenswrapper[5116]: I1209 14:28:34.328536 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mwjpr" event={"ID":"2196786e-546e-470b-a5f8-7ec8ac374e6a","Type":"ContainerStarted","Data":"21ad8a727b896f04a416064253222e26db0cc05d16cbda15599e00d40326923b"} Dec 09 14:28:34 crc kubenswrapper[5116]: I1209 14:28:34.330578 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tb56f" 
event={"ID":"c1753c72-e77b-4e36-8782-7479a468e7fa","Type":"ContainerStarted","Data":"4bf3f3404872b7b39a3fd0c0470e03469448b46d150e1581c0dcdd9b7cc16fb4"} Dec 09 14:28:34 crc kubenswrapper[5116]: I1209 14:28:34.332728 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl" event={"ID":"6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d","Type":"ContainerStarted","Data":"2aa40a7de3d2fdc403e62e3106405897a5b3683a497fe4e64001e4d036aec239"} Dec 09 14:28:34 crc kubenswrapper[5116]: I1209 14:28:34.336100 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"5f5d7fc6-3756-4d60-832e-45aace6c972d","Type":"ContainerStarted","Data":"eecff0285b4e8b46471c43328c0f5f2f1725aa1efda8ad08543a4d4fdd959c8e"} Dec 09 14:28:34 crc kubenswrapper[5116]: I1209 14:28:34.525802 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 09 14:28:34 crc kubenswrapper[5116]: I1209 14:28:34.557791 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 09 14:28:35 crc kubenswrapper[5116]: I1209 14:28:35.347346 5116 generic.go:358] "Generic (PLEG): container finished" podID="2196786e-546e-470b-a5f8-7ec8ac374e6a" containerID="cb846284c1324c3a5e4f50483566709b32d1c3cd5f7b537dee0aae0c166a5f03" exitCode=0 Dec 09 14:28:35 crc kubenswrapper[5116]: I1209 14:28:35.347713 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mwjpr" event={"ID":"2196786e-546e-470b-a5f8-7ec8ac374e6a","Type":"ContainerDied","Data":"cb846284c1324c3a5e4f50483566709b32d1c3cd5f7b537dee0aae0c166a5f03"} Dec 09 14:28:36 crc kubenswrapper[5116]: I1209 14:28:36.362176 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mwjpr" event={"ID":"2196786e-546e-470b-a5f8-7ec8ac374e6a","Type":"ContainerStarted","Data":"f28a2c115446e198a0715918a1e33925dcd239b3da05fe2e59c359c62ad51be2"} Dec 09 14:28:36 crc kubenswrapper[5116]: I1209 14:28:36.389339 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mwjpr" podStartSLOduration=5.887930277 podStartE2EDuration="6.389290544s" podCreationTimestamp="2025-12-09 14:28:30 +0000 UTC" firstStartedPulling="2025-12-09 14:28:34.331468792 +0000 UTC m=+852.853213600" lastFinishedPulling="2025-12-09 14:28:34.832829069 +0000 UTC m=+853.354573867" observedRunningTime="2025-12-09 14:28:36.3802876 +0000 UTC m=+854.902032408" watchObservedRunningTime="2025-12-09 14:28:36.389290544 +0000 UTC m=+854.911035362" Dec 09 14:28:37 crc kubenswrapper[5116]: I1209 14:28:37.065025 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-6t7wt"] Dec 09 14:28:37 crc kubenswrapper[5116]: I1209 14:28:37.086053 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-6t7wt"] Dec 09 14:28:37 crc kubenswrapper[5116]: I1209 14:28:37.086193 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-6t7wt" Dec 09 14:28:37 crc kubenswrapper[5116]: I1209 14:28:37.089092 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-gw8rx\"" Dec 09 14:28:37 crc kubenswrapper[5116]: I1209 14:28:37.143823 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm4zm\" (UniqueName: \"kubernetes.io/projected/ea84e704-ec38-4e0d-beaa-c913272a209c-kube-api-access-pm4zm\") pod \"cert-manager-858d87f86b-6t7wt\" (UID: \"ea84e704-ec38-4e0d-beaa-c913272a209c\") " pod="cert-manager/cert-manager-858d87f86b-6t7wt" Dec 09 14:28:37 crc kubenswrapper[5116]: I1209 14:28:37.143940 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea84e704-ec38-4e0d-beaa-c913272a209c-bound-sa-token\") pod \"cert-manager-858d87f86b-6t7wt\" (UID: \"ea84e704-ec38-4e0d-beaa-c913272a209c\") " pod="cert-manager/cert-manager-858d87f86b-6t7wt" Dec 09 14:28:37 crc kubenswrapper[5116]: I1209 14:28:37.244605 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pm4zm\" (UniqueName: \"kubernetes.io/projected/ea84e704-ec38-4e0d-beaa-c913272a209c-kube-api-access-pm4zm\") pod \"cert-manager-858d87f86b-6t7wt\" (UID: \"ea84e704-ec38-4e0d-beaa-c913272a209c\") " pod="cert-manager/cert-manager-858d87f86b-6t7wt" Dec 09 14:28:37 crc kubenswrapper[5116]: I1209 14:28:37.244859 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea84e704-ec38-4e0d-beaa-c913272a209c-bound-sa-token\") pod \"cert-manager-858d87f86b-6t7wt\" (UID: \"ea84e704-ec38-4e0d-beaa-c913272a209c\") " pod="cert-manager/cert-manager-858d87f86b-6t7wt" Dec 09 14:28:37 crc kubenswrapper[5116]: I1209 14:28:37.264502 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea84e704-ec38-4e0d-beaa-c913272a209c-bound-sa-token\") pod \"cert-manager-858d87f86b-6t7wt\" (UID: \"ea84e704-ec38-4e0d-beaa-c913272a209c\") " pod="cert-manager/cert-manager-858d87f86b-6t7wt" Dec 09 14:28:37 crc kubenswrapper[5116]: I1209 14:28:37.264618 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm4zm\" (UniqueName: \"kubernetes.io/projected/ea84e704-ec38-4e0d-beaa-c913272a209c-kube-api-access-pm4zm\") pod \"cert-manager-858d87f86b-6t7wt\" (UID: \"ea84e704-ec38-4e0d-beaa-c913272a209c\") " pod="cert-manager/cert-manager-858d87f86b-6t7wt" Dec 09 14:28:37 crc kubenswrapper[5116]: I1209 14:28:37.368435 5116 generic.go:358] "Generic (PLEG): container finished" podID="5f5d7fc6-3756-4d60-832e-45aace6c972d" containerID="eecff0285b4e8b46471c43328c0f5f2f1725aa1efda8ad08543a4d4fdd959c8e" exitCode=0 Dec 09 14:28:37 crc kubenswrapper[5116]: I1209 14:28:37.368539 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"5f5d7fc6-3756-4d60-832e-45aace6c972d","Type":"ContainerDied","Data":"eecff0285b4e8b46471c43328c0f5f2f1725aa1efda8ad08543a4d4fdd959c8e"} Dec 09 14:28:37 crc kubenswrapper[5116]: I1209 14:28:37.405455 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-6t7wt" Dec 09 14:28:40 crc kubenswrapper[5116]: I1209 14:28:40.949051 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:40 crc kubenswrapper[5116]: I1209 14:28:40.949361 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:41 crc kubenswrapper[5116]: I1209 14:28:41.006857 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:41 crc kubenswrapper[5116]: I1209 14:28:41.499382 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:41 crc kubenswrapper[5116]: I1209 14:28:41.538185 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mwjpr"] Dec 09 14:28:43 crc kubenswrapper[5116]: I1209 14:28:43.406310 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mwjpr" podUID="2196786e-546e-470b-a5f8-7ec8ac374e6a" containerName="registry-server" containerID="cri-o://f28a2c115446e198a0715918a1e33925dcd239b3da05fe2e59c359c62ad51be2" gracePeriod=2 Dec 09 14:28:45 crc kubenswrapper[5116]: I1209 14:28:45.823008 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.069198 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.069383 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.072402 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-4d6zg\"" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.072611 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.072616 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.073813 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.145204 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.145264 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/c465c9ea-b2d8-46b2-85c6-e628b568de25-builder-dockercfg-4d6zg-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.145735 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c465c9ea-b2d8-46b2-85c6-e628b568de25-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.145895 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.145982 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/c465c9ea-b2d8-46b2-85c6-e628b568de25-builder-dockercfg-4d6zg-push\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.146021 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 
14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.146997 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.147056 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c465c9ea-b2d8-46b2-85c6-e628b568de25-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.147077 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7q5c\" (UniqueName: \"kubernetes.io/projected/c465c9ea-b2d8-46b2-85c6-e628b568de25-kube-api-access-p7q5c\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.147101 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.147137 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.147158 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.247858 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.247914 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.247969 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.248005 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/c465c9ea-b2d8-46b2-85c6-e628b568de25-builder-dockercfg-4d6zg-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.248055 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c465c9ea-b2d8-46b2-85c6-e628b568de25-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.248092 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.248121 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/c465c9ea-b2d8-46b2-85c6-e628b568de25-builder-dockercfg-4d6zg-push\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.248145 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.248179 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.248221 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c465c9ea-b2d8-46b2-85c6-e628b568de25-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.248245 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p7q5c\" (UniqueName: \"kubernetes.io/projected/c465c9ea-b2d8-46b2-85c6-e628b568de25-kube-api-access-p7q5c\") pod 
\"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.248278 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.248686 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.250005 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.250032 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c465c9ea-b2d8-46b2-85c6-e628b568de25-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.250052 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c465c9ea-b2d8-46b2-85c6-e628b568de25-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.250152 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.250345 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.250482 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.250817 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.251203 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.255410 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/c465c9ea-b2d8-46b2-85c6-e628b568de25-builder-dockercfg-4d6zg-push\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.255494 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/c465c9ea-b2d8-46b2-85c6-e628b568de25-builder-dockercfg-4d6zg-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.265371 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7q5c\" (UniqueName: \"kubernetes.io/projected/c465c9ea-b2d8-46b2-85c6-e628b568de25-kube-api-access-p7q5c\") pod \"service-telemetry-operator-1-build\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.392909 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.769674 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mwjpr_2196786e-546e-470b-a5f8-7ec8ac374e6a/registry-server/0.log" Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.771031 5116 generic.go:358] "Generic (PLEG): container finished" podID="2196786e-546e-470b-a5f8-7ec8ac374e6a" containerID="f28a2c115446e198a0715918a1e33925dcd239b3da05fe2e59c359c62ad51be2" exitCode=137 Dec 09 14:28:46 crc kubenswrapper[5116]: I1209 14:28:46.771263 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mwjpr" event={"ID":"2196786e-546e-470b-a5f8-7ec8ac374e6a","Type":"ContainerDied","Data":"f28a2c115446e198a0715918a1e33925dcd239b3da05fe2e59c359c62ad51be2"} Dec 09 14:28:48 crc kubenswrapper[5116]: I1209 14:28:48.990937 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 14:28:48 crc kubenswrapper[5116]: W1209 14:28:48.998621 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc465c9ea_b2d8_46b2_85c6_e628b568de25.slice/crio-0a73127377e13c53b4364f0ef70750687006adb1e0578b4ab77af6e428b7aa8d WatchSource:0}: Error finding container 0a73127377e13c53b4364f0ef70750687006adb1e0578b4ab77af6e428b7aa8d: Status 404 returned error can't find the container with id 0a73127377e13c53b4364f0ef70750687006adb1e0578b4ab77af6e428b7aa8d Dec 09 14:28:49 crc kubenswrapper[5116]: I1209 14:28:49.045015 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-6t7wt"] Dec 09 14:28:49 crc kubenswrapper[5116]: W1209 14:28:49.049631 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea84e704_ec38_4e0d_beaa_c913272a209c.slice/crio-b03427e2c03923fa77c33431cd801356fda03ba2d348f270992c5769cd580e81 WatchSource:0}: Error finding container b03427e2c03923fa77c33431cd801356fda03ba2d348f270992c5769cd580e81: Status 404 returned error can't find the container with id b03427e2c03923fa77c33431cd801356fda03ba2d348f270992c5769cd580e81 Dec 09 14:28:49 crc kubenswrapper[5116]: I1209 14:28:49.797891 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-6t7wt" event={"ID":"ea84e704-ec38-4e0d-beaa-c913272a209c","Type":"ContainerStarted","Data":"b03427e2c03923fa77c33431cd801356fda03ba2d348f270992c5769cd580e81"} Dec 09 14:28:49 crc kubenswrapper[5116]: I1209 14:28:49.800692 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c465c9ea-b2d8-46b2-85c6-e628b568de25","Type":"ContainerStarted","Data":"0a73127377e13c53b4364f0ef70750687006adb1e0578b4ab77af6e428b7aa8d"} Dec 09 14:28:51 crc kubenswrapper[5116]: E1209 14:28:51.392500 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28a2c115446e198a0715918a1e33925dcd239b3da05fe2e59c359c62ad51be2 is running failed: container process not found" containerID="f28a2c115446e198a0715918a1e33925dcd239b3da05fe2e59c359c62ad51be2" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:28:51 crc kubenswrapper[5116]: E1209 14:28:51.393318 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of f28a2c115446e198a0715918a1e33925dcd239b3da05fe2e59c359c62ad51be2 is running failed: container process not found" containerID="f28a2c115446e198a0715918a1e33925dcd239b3da05fe2e59c359c62ad51be2" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:28:51 crc kubenswrapper[5116]: E1209 14:28:51.393568 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28a2c115446e198a0715918a1e33925dcd239b3da05fe2e59c359c62ad51be2 is running failed: container process not found" containerID="f28a2c115446e198a0715918a1e33925dcd239b3da05fe2e59c359c62ad51be2" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:28:51 crc kubenswrapper[5116]: E1209 14:28:51.393599 5116 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28a2c115446e198a0715918a1e33925dcd239b3da05fe2e59c359c62ad51be2 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-mwjpr" podUID="2196786e-546e-470b-a5f8-7ec8ac374e6a" containerName="registry-server" probeResult="unknown" Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.166905 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.166999 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.167050 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.167751 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7315817fcea3499a7635fc1289860612fc7524332d75eb55b4b0ddd1ffdb8798"} pod="openshift-machine-config-operator/machine-config-daemon-phdhk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.167830 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" containerID="cri-o://7315817fcea3499a7635fc1289860612fc7524332d75eb55b4b0ddd1ffdb8798" gracePeriod=600 Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.323511 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mwjpr_2196786e-546e-470b-a5f8-7ec8ac374e6a/registry-server/0.log" Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.324345 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.436495 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2196786e-546e-470b-a5f8-7ec8ac374e6a-catalog-content\") pod \"2196786e-546e-470b-a5f8-7ec8ac374e6a\" (UID: \"2196786e-546e-470b-a5f8-7ec8ac374e6a\") " Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.436612 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2196786e-546e-470b-a5f8-7ec8ac374e6a-utilities\") pod \"2196786e-546e-470b-a5f8-7ec8ac374e6a\" (UID: \"2196786e-546e-470b-a5f8-7ec8ac374e6a\") " Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.436715 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jljzm\" (UniqueName: \"kubernetes.io/projected/2196786e-546e-470b-a5f8-7ec8ac374e6a-kube-api-access-jljzm\") pod \"2196786e-546e-470b-a5f8-7ec8ac374e6a\" (UID: \"2196786e-546e-470b-a5f8-7ec8ac374e6a\") " Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.438282 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2196786e-546e-470b-a5f8-7ec8ac374e6a-utilities" (OuterVolumeSpecName: "utilities") pod "2196786e-546e-470b-a5f8-7ec8ac374e6a" (UID: "2196786e-546e-470b-a5f8-7ec8ac374e6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.448544 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2196786e-546e-470b-a5f8-7ec8ac374e6a-kube-api-access-jljzm" (OuterVolumeSpecName: "kube-api-access-jljzm") pod "2196786e-546e-470b-a5f8-7ec8ac374e6a" (UID: "2196786e-546e-470b-a5f8-7ec8ac374e6a"). InnerVolumeSpecName "kube-api-access-jljzm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.490061 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2196786e-546e-470b-a5f8-7ec8ac374e6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2196786e-546e-470b-a5f8-7ec8ac374e6a" (UID: "2196786e-546e-470b-a5f8-7ec8ac374e6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.538836 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jljzm\" (UniqueName: \"kubernetes.io/projected/2196786e-546e-470b-a5f8-7ec8ac374e6a-kube-api-access-jljzm\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.538876 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2196786e-546e-470b-a5f8-7ec8ac374e6a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.538889 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2196786e-546e-470b-a5f8-7ec8ac374e6a-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.822570 5116 generic.go:358] "Generic (PLEG): container finished" podID="140ab739-f0e3-4429-8e23-03782755777d" containerID="7315817fcea3499a7635fc1289860612fc7524332d75eb55b4b0ddd1ffdb8798" exitCode=0 Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.822654 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" event={"ID":"140ab739-f0e3-4429-8e23-03782755777d","Type":"ContainerDied","Data":"7315817fcea3499a7635fc1289860612fc7524332d75eb55b4b0ddd1ffdb8798"} Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.822695 5116 scope.go:117] "RemoveContainer" containerID="63fa14cef65c6ac709b2413472d850235cc43d843f7e025a50cc9050b6ff3247" Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.824146 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mwjpr_2196786e-546e-470b-a5f8-7ec8ac374e6a/registry-server/0.log" Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.824755 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mwjpr" event={"ID":"2196786e-546e-470b-a5f8-7ec8ac374e6a","Type":"ContainerDied","Data":"21ad8a727b896f04a416064253222e26db0cc05d16cbda15599e00d40326923b"} Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.824832 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mwjpr" Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.857006 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mwjpr"] Dec 09 14:28:52 crc kubenswrapper[5116]: I1209 14:28:52.866101 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mwjpr"] Dec 09 14:28:53 crc kubenswrapper[5116]: I1209 14:28:53.546123 5116 scope.go:117] "RemoveContainer" containerID="f28a2c115446e198a0715918a1e33925dcd239b3da05fe2e59c359c62ad51be2" Dec 09 14:28:53 crc kubenswrapper[5116]: I1209 14:28:53.568474 5116 scope.go:117] "RemoveContainer" containerID="cb846284c1324c3a5e4f50483566709b32d1c3cd5f7b537dee0aae0c166a5f03" Dec 09 14:28:53 crc kubenswrapper[5116]: I1209 14:28:53.583973 5116 scope.go:117] "RemoveContainer" containerID="875a7b87331351c54ddfb8882536d7d63e4326c2f8c0b2f563d99144f097ca6f" Dec 09 14:28:53 crc kubenswrapper[5116]: I1209 14:28:53.762386 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2196786e-546e-470b-a5f8-7ec8ac374e6a" path="/var/lib/kubelet/pods/2196786e-546e-470b-a5f8-7ec8ac374e6a/volumes" Dec 09 14:28:53 crc kubenswrapper[5116]: I1209 14:28:53.846604 5116 generic.go:358] "Generic (PLEG): container finished" podID="5f5d7fc6-3756-4d60-832e-45aace6c972d" containerID="15923d85512c0f3b0a67b1282feb933bf8f72ca52c897c34185cc3f1f061e854" exitCode=0 Dec 09 14:28:53 crc kubenswrapper[5116]: I1209 14:28:53.846743 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"5f5d7fc6-3756-4d60-832e-45aace6c972d","Type":"ContainerDied","Data":"15923d85512c0f3b0a67b1282feb933bf8f72ca52c897c34185cc3f1f061e854"} Dec 09 14:28:54 crc kubenswrapper[5116]: I1209 14:28:54.863875 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl" event={"ID":"6b6978fa-eaf2-4f64-88ef-c4f1b3c8ca8d","Type":"ContainerStarted","Data":"f81ef1fc47dcfce64d97d0051b21c0d607b1c2849b25970fe0d62d0d29161dcd"} Dec 09 14:28:54 crc kubenswrapper[5116]: I1209 14:28:54.865484 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl" Dec 09 14:28:54 crc kubenswrapper[5116]: I1209 14:28:54.869407 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"5f5d7fc6-3756-4d60-832e-45aace6c972d","Type":"ContainerStarted","Data":"c5f5b390ae87adc89633f01f8f5fa9e98bc46037253575acf2607b2d740b5f19"} Dec 09 14:28:54 crc kubenswrapper[5116]: I1209 14:28:54.869776 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:28:54 crc kubenswrapper[5116]: I1209 14:28:54.872080 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" event={"ID":"140ab739-f0e3-4429-8e23-03782755777d","Type":"ContainerStarted","Data":"c49f65dc7d0b501013e6e6317fab8447f902413dcd36308e941e8ac6d3a050d5"} Dec 09 14:28:54 crc kubenswrapper[5116]: I1209 14:28:54.874129 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tb56f" event={"ID":"c1753c72-e77b-4e36-8782-7479a468e7fa","Type":"ContainerStarted","Data":"757030dc41fd20a4581a3e1f036e10a47e3d277648cfde3a12ce4c2f1d235544"} Dec 09 14:28:54 crc kubenswrapper[5116]: I1209 14:28:54.876602 
5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-6t7wt" event={"ID":"ea84e704-ec38-4e0d-beaa-c913272a209c","Type":"ContainerStarted","Data":"cf3cf52b439f578e252dfddf9f204205f77a2c1d70d3872332828858f3520f46"} Dec 09 14:28:54 crc kubenswrapper[5116]: I1209 14:28:54.883299 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl" podStartSLOduration=13.841273738 podStartE2EDuration="33.883272801s" podCreationTimestamp="2025-12-09 14:28:21 +0000 UTC" firstStartedPulling="2025-12-09 14:28:33.726834734 +0000 UTC m=+852.248579542" lastFinishedPulling="2025-12-09 14:28:53.768833797 +0000 UTC m=+872.290578605" observedRunningTime="2025-12-09 14:28:54.880462588 +0000 UTC m=+873.402207386" watchObservedRunningTime="2025-12-09 14:28:54.883272801 +0000 UTC m=+873.405017599" Dec 09 14:28:54 crc kubenswrapper[5116]: I1209 14:28:54.900761 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tb56f" podStartSLOduration=12.507805767 podStartE2EDuration="32.900738916s" podCreationTimestamp="2025-12-09 14:28:22 +0000 UTC" firstStartedPulling="2025-12-09 14:28:33.445795579 +0000 UTC m=+851.967540377" lastFinishedPulling="2025-12-09 14:28:53.838728718 +0000 UTC m=+872.360473526" observedRunningTime="2025-12-09 14:28:54.895702065 +0000 UTC m=+873.417446863" watchObservedRunningTime="2025-12-09 14:28:54.900738916 +0000 UTC m=+873.422483714" Dec 09 14:28:54 crc kubenswrapper[5116]: I1209 14:28:54.948405 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=23.549342872 podStartE2EDuration="39.948383998s" podCreationTimestamp="2025-12-09 14:28:15 +0000 UTC" firstStartedPulling="2025-12-09 14:28:17.320388165 +0000 UTC m=+835.842132963" lastFinishedPulling="2025-12-09 14:28:33.719429291 +0000 UTC m=+852.241174089" observedRunningTime="2025-12-09 14:28:54.940700048 +0000 UTC m=+873.462444846" watchObservedRunningTime="2025-12-09 14:28:54.948383998 +0000 UTC m=+873.470128796" Dec 09 14:28:54 crc kubenswrapper[5116]: I1209 14:28:54.976888 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-6t7wt" podStartSLOduration=12.819533376 podStartE2EDuration="17.97686835s" podCreationTimestamp="2025-12-09 14:28:37 +0000 UTC" firstStartedPulling="2025-12-09 14:28:49.052512367 +0000 UTC m=+867.574257165" lastFinishedPulling="2025-12-09 14:28:54.209847341 +0000 UTC m=+872.731592139" observedRunningTime="2025-12-09 14:28:54.973970325 +0000 UTC m=+873.495715123" watchObservedRunningTime="2025-12-09 14:28:54.97686835 +0000 UTC m=+873.498613158" Dec 09 14:28:55 crc kubenswrapper[5116]: I1209 14:28:55.131225 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.733691 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.734785 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2196786e-546e-470b-a5f8-7ec8ac374e6a" containerName="registry-server" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.734805 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="2196786e-546e-470b-a5f8-7ec8ac374e6a" containerName="registry-server" Dec 
09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.734847 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2196786e-546e-470b-a5f8-7ec8ac374e6a" containerName="extract-utilities" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.734856 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="2196786e-546e-470b-a5f8-7ec8ac374e6a" containerName="extract-utilities" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.734869 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2196786e-546e-470b-a5f8-7ec8ac374e6a" containerName="extract-content" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.734875 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="2196786e-546e-470b-a5f8-7ec8ac374e6a" containerName="extract-content" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.735009 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="2196786e-546e-470b-a5f8-7ec8ac374e6a" containerName="registry-server" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.749220 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.756245 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.756855 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.757142 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.760220 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.891429 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.891510 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/fcbd9a7f-cee1-4604-b31d-4b060974cbed-builder-dockercfg-4d6zg-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.891552 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fcbd9a7f-cee1-4604-b31d-4b060974cbed-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.891624 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.891711 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.891740 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fcbd9a7f-cee1-4604-b31d-4b060974cbed-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.891765 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cll4\" (UniqueName: \"kubernetes.io/projected/fcbd9a7f-cee1-4604-b31d-4b060974cbed-kube-api-access-9cll4\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.891830 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.892020 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.892055 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/fcbd9a7f-cee1-4604-b31d-4b060974cbed-builder-dockercfg-4d6zg-push\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.892079 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.892109 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.992798 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.992849 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fcbd9a7f-cee1-4604-b31d-4b060974cbed-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.992868 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9cll4\" (UniqueName: \"kubernetes.io/projected/fcbd9a7f-cee1-4604-b31d-4b060974cbed-kube-api-access-9cll4\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.992887 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.992922 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.992941 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/fcbd9a7f-cee1-4604-b31d-4b060974cbed-builder-dockercfg-4d6zg-push\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.992992 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fcbd9a7f-cee1-4604-b31d-4b060974cbed-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.993540 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" 
Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.993635 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.993687 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.993731 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.993840 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.993874 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.993892 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/fcbd9a7f-cee1-4604-b31d-4b060974cbed-builder-dockercfg-4d6zg-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.993995 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fcbd9a7f-cee1-4604-b31d-4b060974cbed-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.994022 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.994034 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.994226 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fcbd9a7f-cee1-4604-b31d-4b060974cbed-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.994263 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.994372 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.995176 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.999602 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/fcbd9a7f-cee1-4604-b31d-4b060974cbed-builder-dockercfg-4d6zg-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:56 crc kubenswrapper[5116]: I1209 14:28:56.999626 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/fcbd9a7f-cee1-4604-b31d-4b060974cbed-builder-dockercfg-4d6zg-push\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:57 crc kubenswrapper[5116]: I1209 14:28:57.020502 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cll4\" (UniqueName: \"kubernetes.io/projected/fcbd9a7f-cee1-4604-b31d-4b060974cbed-kube-api-access-9cll4\") pod \"service-telemetry-operator-2-build\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:57 crc kubenswrapper[5116]: I1209 14:28:57.073402 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:28:59 crc kubenswrapper[5116]: I1209 14:28:59.676174 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 09 14:28:59 crc kubenswrapper[5116]: W1209 14:28:59.680394 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcbd9a7f_cee1_4604_b31d_4b060974cbed.slice/crio-e6d3a1a02fcab684629eb07107d029bf9e0fd873b60675ee0a9fc5bcac435bc8 WatchSource:0}: Error finding container e6d3a1a02fcab684629eb07107d029bf9e0fd873b60675ee0a9fc5bcac435bc8: Status 404 returned error can't find the container with id e6d3a1a02fcab684629eb07107d029bf9e0fd873b60675ee0a9fc5bcac435bc8 Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.039218 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"fcbd9a7f-cee1-4604-b31d-4b060974cbed","Type":"ContainerStarted","Data":"1186f66f0123b242a0b562e5ef6b095be7b6ca0b9110c77c298175a04be0a204"} Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.039291 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"fcbd9a7f-cee1-4604-b31d-4b060974cbed","Type":"ContainerStarted","Data":"e6d3a1a02fcab684629eb07107d029bf9e0fd873b60675ee0a9fc5bcac435bc8"} Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.040898 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="c465c9ea-b2d8-46b2-85c6-e628b568de25" containerName="manage-dockerfile" containerID="cri-o://774ada729f8a087d5c205c6eb3d06ff0596ce90e5fa9c675cbd6024132869ece" gracePeriod=30 Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.040900 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c465c9ea-b2d8-46b2-85c6-e628b568de25","Type":"ContainerStarted","Data":"774ada729f8a087d5c205c6eb3d06ff0596ce90e5fa9c675cbd6024132869ece"} Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.120487 5116 ???:1] "http: TLS handshake error from 192.168.126.11:41428: no serving certificate available for the kubelet" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.487066 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_c465c9ea-b2d8-46b2-85c6-e628b568de25/manage-dockerfile/0.log" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.487394 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.567811 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-buildworkdir\") pod \"c465c9ea-b2d8-46b2-85c6-e628b568de25\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.568350 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c465c9ea-b2d8-46b2-85c6-e628b568de25" (UID: "c465c9ea-b2d8-46b2-85c6-e628b568de25"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.568404 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-blob-cache\") pod \"c465c9ea-b2d8-46b2-85c6-e628b568de25\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.568431 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/c465c9ea-b2d8-46b2-85c6-e628b568de25-builder-dockercfg-4d6zg-push\") pod \"c465c9ea-b2d8-46b2-85c6-e628b568de25\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.568623 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c465c9ea-b2d8-46b2-85c6-e628b568de25" (UID: "c465c9ea-b2d8-46b2-85c6-e628b568de25"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.568661 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c465c9ea-b2d8-46b2-85c6-e628b568de25-buildcachedir\") pod \"c465c9ea-b2d8-46b2-85c6-e628b568de25\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.568690 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c465c9ea-b2d8-46b2-85c6-e628b568de25-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c465c9ea-b2d8-46b2-85c6-e628b568de25" (UID: "c465c9ea-b2d8-46b2-85c6-e628b568de25"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.568715 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-ca-bundles\") pod \"c465c9ea-b2d8-46b2-85c6-e628b568de25\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.568903 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-proxy-ca-bundles\") pod \"c465c9ea-b2d8-46b2-85c6-e628b568de25\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569037 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-system-configs\") pod \"c465c9ea-b2d8-46b2-85c6-e628b568de25\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569108 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-container-storage-root\") pod \"c465c9ea-b2d8-46b2-85c6-e628b568de25\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569163 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c465c9ea-b2d8-46b2-85c6-e628b568de25-node-pullsecrets\") pod \"c465c9ea-b2d8-46b2-85c6-e628b568de25\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569195 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/c465c9ea-b2d8-46b2-85c6-e628b568de25-builder-dockercfg-4d6zg-pull\") pod \"c465c9ea-b2d8-46b2-85c6-e628b568de25\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569222 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7q5c\" (UniqueName: \"kubernetes.io/projected/c465c9ea-b2d8-46b2-85c6-e628b568de25-kube-api-access-p7q5c\") pod \"c465c9ea-b2d8-46b2-85c6-e628b568de25\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569264 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-container-storage-run\") pod \"c465c9ea-b2d8-46b2-85c6-e628b568de25\" (UID: \"c465c9ea-b2d8-46b2-85c6-e628b568de25\") " Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569366 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c465c9ea-b2d8-46b2-85c6-e628b568de25" (UID: "c465c9ea-b2d8-46b2-85c6-e628b568de25"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569541 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c465c9ea-b2d8-46b2-85c6-e628b568de25" (UID: "c465c9ea-b2d8-46b2-85c6-e628b568de25"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569677 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c465c9ea-b2d8-46b2-85c6-e628b568de25" (UID: "c465c9ea-b2d8-46b2-85c6-e628b568de25"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569740 5116 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569756 5116 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569767 5116 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c465c9ea-b2d8-46b2-85c6-e628b568de25-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569780 5116 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569791 5116 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569804 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569860 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c465c9ea-b2d8-46b2-85c6-e628b568de25" (UID: "c465c9ea-b2d8-46b2-85c6-e628b568de25"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.569897 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c465c9ea-b2d8-46b2-85c6-e628b568de25-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c465c9ea-b2d8-46b2-85c6-e628b568de25" (UID: "c465c9ea-b2d8-46b2-85c6-e628b568de25"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.570531 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c465c9ea-b2d8-46b2-85c6-e628b568de25" (UID: "c465c9ea-b2d8-46b2-85c6-e628b568de25"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.573863 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c465c9ea-b2d8-46b2-85c6-e628b568de25-builder-dockercfg-4d6zg-pull" (OuterVolumeSpecName: "builder-dockercfg-4d6zg-pull") pod "c465c9ea-b2d8-46b2-85c6-e628b568de25" (UID: "c465c9ea-b2d8-46b2-85c6-e628b568de25"). InnerVolumeSpecName "builder-dockercfg-4d6zg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.588134 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c465c9ea-b2d8-46b2-85c6-e628b568de25-builder-dockercfg-4d6zg-push" (OuterVolumeSpecName: "builder-dockercfg-4d6zg-push") pod "c465c9ea-b2d8-46b2-85c6-e628b568de25" (UID: "c465c9ea-b2d8-46b2-85c6-e628b568de25"). InnerVolumeSpecName "builder-dockercfg-4d6zg-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.589163 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c465c9ea-b2d8-46b2-85c6-e628b568de25-kube-api-access-p7q5c" (OuterVolumeSpecName: "kube-api-access-p7q5c") pod "c465c9ea-b2d8-46b2-85c6-e628b568de25" (UID: "c465c9ea-b2d8-46b2-85c6-e628b568de25"). InnerVolumeSpecName "kube-api-access-p7q5c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.670471 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c465c9ea-b2d8-46b2-85c6-e628b568de25-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.670759 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/c465c9ea-b2d8-46b2-85c6-e628b568de25-builder-dockercfg-4d6zg-push\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.670772 5116 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c465c9ea-b2d8-46b2-85c6-e628b568de25-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.670784 5116 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c465c9ea-b2d8-46b2-85c6-e628b568de25-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.670795 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/c465c9ea-b2d8-46b2-85c6-e628b568de25-builder-dockercfg-4d6zg-pull\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.670807 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p7q5c\" (UniqueName: \"kubernetes.io/projected/c465c9ea-b2d8-46b2-85c6-e628b568de25-kube-api-access-p7q5c\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:00 crc kubenswrapper[5116]: I1209 14:29:00.889087 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-t9hvl" Dec 09 14:29:01 crc kubenswrapper[5116]: I1209 14:29:01.048265 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_c465c9ea-b2d8-46b2-85c6-e628b568de25/manage-dockerfile/0.log" Dec 09 14:29:01 crc kubenswrapper[5116]: I1209 14:29:01.048312 5116 generic.go:358] "Generic (PLEG): container finished" podID="c465c9ea-b2d8-46b2-85c6-e628b568de25" containerID="774ada729f8a087d5c205c6eb3d06ff0596ce90e5fa9c675cbd6024132869ece" exitCode=1 Dec 09 14:29:01 crc kubenswrapper[5116]: I1209 14:29:01.048404 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:29:01 crc kubenswrapper[5116]: I1209 14:29:01.049214 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c465c9ea-b2d8-46b2-85c6-e628b568de25","Type":"ContainerDied","Data":"774ada729f8a087d5c205c6eb3d06ff0596ce90e5fa9c675cbd6024132869ece"} Dec 09 14:29:01 crc kubenswrapper[5116]: I1209 14:29:01.049247 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c465c9ea-b2d8-46b2-85c6-e628b568de25","Type":"ContainerDied","Data":"0a73127377e13c53b4364f0ef70750687006adb1e0578b4ab77af6e428b7aa8d"} Dec 09 14:29:01 crc kubenswrapper[5116]: I1209 14:29:01.049264 5116 scope.go:117] "RemoveContainer" containerID="774ada729f8a087d5c205c6eb3d06ff0596ce90e5fa9c675cbd6024132869ece" Dec 09 14:29:01 crc kubenswrapper[5116]: I1209 14:29:01.070122 5116 scope.go:117] "RemoveContainer" containerID="774ada729f8a087d5c205c6eb3d06ff0596ce90e5fa9c675cbd6024132869ece" Dec 09 14:29:01 crc kubenswrapper[5116]: E1209 14:29:01.070675 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"774ada729f8a087d5c205c6eb3d06ff0596ce90e5fa9c675cbd6024132869ece\": container with ID starting with 774ada729f8a087d5c205c6eb3d06ff0596ce90e5fa9c675cbd6024132869ece not found: ID does not exist" containerID="774ada729f8a087d5c205c6eb3d06ff0596ce90e5fa9c675cbd6024132869ece" Dec 09 14:29:01 crc kubenswrapper[5116]: I1209 14:29:01.070716 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"774ada729f8a087d5c205c6eb3d06ff0596ce90e5fa9c675cbd6024132869ece"} err="failed to get container status \"774ada729f8a087d5c205c6eb3d06ff0596ce90e5fa9c675cbd6024132869ece\": rpc error: code = NotFound desc = could not find container \"774ada729f8a087d5c205c6eb3d06ff0596ce90e5fa9c675cbd6024132869ece\": container with ID starting with 774ada729f8a087d5c205c6eb3d06ff0596ce90e5fa9c675cbd6024132869ece not found: ID does not exist" Dec 09 14:29:01 crc kubenswrapper[5116]: I1209 14:29:01.091477 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 14:29:01 crc kubenswrapper[5116]: I1209 14:29:01.100354 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 14:29:01 crc kubenswrapper[5116]: I1209 14:29:01.150077 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 09 14:29:01 crc kubenswrapper[5116]: I1209 14:29:01.756898 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c465c9ea-b2d8-46b2-85c6-e628b568de25" path="/var/lib/kubelet/pods/c465c9ea-b2d8-46b2-85c6-e628b568de25/volumes" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.054568 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-2-build" podUID="fcbd9a7f-cee1-4604-b31d-4b060974cbed" containerName="git-clone" containerID="cri-o://1186f66f0123b242a0b562e5ef6b095be7b6ca0b9110c77c298175a04be0a204" gracePeriod=30 Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.426237 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_fcbd9a7f-cee1-4604-b31d-4b060974cbed/git-clone/0.log" Dec 09 
14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.426322 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.496662 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-buildworkdir\") pod \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.496745 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-blob-cache\") pod \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.496775 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-container-storage-run\") pod \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.496799 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cll4\" (UniqueName: \"kubernetes.io/projected/fcbd9a7f-cee1-4604-b31d-4b060974cbed-kube-api-access-9cll4\") pod \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.496835 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fcbd9a7f-cee1-4604-b31d-4b060974cbed-node-pullsecrets\") pod \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.496852 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-ca-bundles\") pod \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.496872 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-system-configs\") pod \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.496897 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/fcbd9a7f-cee1-4604-b31d-4b060974cbed-builder-dockercfg-4d6zg-pull\") pod \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.496920 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-container-storage-root\") pod \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.496975 5116 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fcbd9a7f-cee1-4604-b31d-4b060974cbed-buildcachedir\") pod \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.497050 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-proxy-ca-bundles\") pod \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.497086 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/fcbd9a7f-cee1-4604-b31d-4b060974cbed-builder-dockercfg-4d6zg-push\") pod \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\" (UID: \"fcbd9a7f-cee1-4604-b31d-4b060974cbed\") " Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.497414 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcbd9a7f-cee1-4604-b31d-4b060974cbed-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "fcbd9a7f-cee1-4604-b31d-4b060974cbed" (UID: "fcbd9a7f-cee1-4604-b31d-4b060974cbed"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.497711 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcbd9a7f-cee1-4604-b31d-4b060974cbed-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "fcbd9a7f-cee1-4604-b31d-4b060974cbed" (UID: "fcbd9a7f-cee1-4604-b31d-4b060974cbed"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.497863 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "fcbd9a7f-cee1-4604-b31d-4b060974cbed" (UID: "fcbd9a7f-cee1-4604-b31d-4b060974cbed"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.499300 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "fcbd9a7f-cee1-4604-b31d-4b060974cbed" (UID: "fcbd9a7f-cee1-4604-b31d-4b060974cbed"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.499402 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "fcbd9a7f-cee1-4604-b31d-4b060974cbed" (UID: "fcbd9a7f-cee1-4604-b31d-4b060974cbed"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.499459 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "fcbd9a7f-cee1-4604-b31d-4b060974cbed" (UID: "fcbd9a7f-cee1-4604-b31d-4b060974cbed"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.500005 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "fcbd9a7f-cee1-4604-b31d-4b060974cbed" (UID: "fcbd9a7f-cee1-4604-b31d-4b060974cbed"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.500026 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "fcbd9a7f-cee1-4604-b31d-4b060974cbed" (UID: "fcbd9a7f-cee1-4604-b31d-4b060974cbed"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.500117 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "fcbd9a7f-cee1-4604-b31d-4b060974cbed" (UID: "fcbd9a7f-cee1-4604-b31d-4b060974cbed"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.512286 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcbd9a7f-cee1-4604-b31d-4b060974cbed-builder-dockercfg-4d6zg-pull" (OuterVolumeSpecName: "builder-dockercfg-4d6zg-pull") pod "fcbd9a7f-cee1-4604-b31d-4b060974cbed" (UID: "fcbd9a7f-cee1-4604-b31d-4b060974cbed"). InnerVolumeSpecName "builder-dockercfg-4d6zg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.512365 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcbd9a7f-cee1-4604-b31d-4b060974cbed-kube-api-access-9cll4" (OuterVolumeSpecName: "kube-api-access-9cll4") pod "fcbd9a7f-cee1-4604-b31d-4b060974cbed" (UID: "fcbd9a7f-cee1-4604-b31d-4b060974cbed"). InnerVolumeSpecName "kube-api-access-9cll4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.513235 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcbd9a7f-cee1-4604-b31d-4b060974cbed-builder-dockercfg-4d6zg-push" (OuterVolumeSpecName: "builder-dockercfg-4d6zg-push") pod "fcbd9a7f-cee1-4604-b31d-4b060974cbed" (UID: "fcbd9a7f-cee1-4604-b31d-4b060974cbed"). InnerVolumeSpecName "builder-dockercfg-4d6zg-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.598353 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9cll4\" (UniqueName: \"kubernetes.io/projected/fcbd9a7f-cee1-4604-b31d-4b060974cbed-kube-api-access-9cll4\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.598401 5116 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fcbd9a7f-cee1-4604-b31d-4b060974cbed-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.598409 5116 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.598417 5116 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.598426 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/fcbd9a7f-cee1-4604-b31d-4b060974cbed-builder-dockercfg-4d6zg-pull\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.598435 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.598443 5116 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fcbd9a7f-cee1-4604-b31d-4b060974cbed-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.598451 5116 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.598459 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/fcbd9a7f-cee1-4604-b31d-4b060974cbed-builder-dockercfg-4d6zg-push\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.598468 5116 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.598477 5116 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:02 crc kubenswrapper[5116]: I1209 14:29:02.598485 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fcbd9a7f-cee1-4604-b31d-4b060974cbed-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:03 crc kubenswrapper[5116]: I1209 14:29:03.060597 5116 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_fcbd9a7f-cee1-4604-b31d-4b060974cbed/git-clone/0.log" Dec 09 14:29:03 crc kubenswrapper[5116]: I1209 14:29:03.060644 5116 generic.go:358] "Generic (PLEG): container finished" podID="fcbd9a7f-cee1-4604-b31d-4b060974cbed" containerID="1186f66f0123b242a0b562e5ef6b095be7b6ca0b9110c77c298175a04be0a204" exitCode=1 Dec 09 14:29:03 crc kubenswrapper[5116]: I1209 14:29:03.060751 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:29:03 crc kubenswrapper[5116]: I1209 14:29:03.060755 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"fcbd9a7f-cee1-4604-b31d-4b060974cbed","Type":"ContainerDied","Data":"1186f66f0123b242a0b562e5ef6b095be7b6ca0b9110c77c298175a04be0a204"} Dec 09 14:29:03 crc kubenswrapper[5116]: I1209 14:29:03.061224 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"fcbd9a7f-cee1-4604-b31d-4b060974cbed","Type":"ContainerDied","Data":"e6d3a1a02fcab684629eb07107d029bf9e0fd873b60675ee0a9fc5bcac435bc8"} Dec 09 14:29:03 crc kubenswrapper[5116]: I1209 14:29:03.061255 5116 scope.go:117] "RemoveContainer" containerID="1186f66f0123b242a0b562e5ef6b095be7b6ca0b9110c77c298175a04be0a204" Dec 09 14:29:03 crc kubenswrapper[5116]: I1209 14:29:03.080867 5116 scope.go:117] "RemoveContainer" containerID="1186f66f0123b242a0b562e5ef6b095be7b6ca0b9110c77c298175a04be0a204" Dec 09 14:29:03 crc kubenswrapper[5116]: E1209 14:29:03.082042 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1186f66f0123b242a0b562e5ef6b095be7b6ca0b9110c77c298175a04be0a204\": container with ID starting with 1186f66f0123b242a0b562e5ef6b095be7b6ca0b9110c77c298175a04be0a204 not found: ID does not exist" containerID="1186f66f0123b242a0b562e5ef6b095be7b6ca0b9110c77c298175a04be0a204" Dec 09 14:29:03 crc kubenswrapper[5116]: I1209 14:29:03.082090 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1186f66f0123b242a0b562e5ef6b095be7b6ca0b9110c77c298175a04be0a204"} err="failed to get container status \"1186f66f0123b242a0b562e5ef6b095be7b6ca0b9110c77c298175a04be0a204\": rpc error: code = NotFound desc = could not find container \"1186f66f0123b242a0b562e5ef6b095be7b6ca0b9110c77c298175a04be0a204\": container with ID starting with 1186f66f0123b242a0b562e5ef6b095be7b6ca0b9110c77c298175a04be0a204 not found: ID does not exist" Dec 09 14:29:03 crc kubenswrapper[5116]: I1209 14:29:03.097567 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 09 14:29:03 crc kubenswrapper[5116]: I1209 14:29:03.101406 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 09 14:29:03 crc kubenswrapper[5116]: I1209 14:29:03.756910 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcbd9a7f-cee1-4604-b31d-4b060974cbed" path="/var/lib/kubelet/pods/fcbd9a7f-cee1-4604-b31d-4b060974cbed/volumes" Dec 09 14:29:05 crc kubenswrapper[5116]: I1209 14:29:05.981358 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="5f5d7fc6-3756-4d60-832e-45aace6c972d" containerName="elasticsearch" probeResult="failure" output=< Dec 09 14:29:05 
crc kubenswrapper[5116]: {"timestamp": "2025-12-09T14:29:05+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 09 14:29:05 crc kubenswrapper[5116]: > Dec 09 14:29:10 crc kubenswrapper[5116]: I1209 14:29:10.971606 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="5f5d7fc6-3756-4d60-832e-45aace6c972d" containerName="elasticsearch" probeResult="failure" output=< Dec 09 14:29:10 crc kubenswrapper[5116]: {"timestamp": "2025-12-09T14:29:10+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 09 14:29:10 crc kubenswrapper[5116]: > Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.600121 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.601090 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c465c9ea-b2d8-46b2-85c6-e628b568de25" containerName="manage-dockerfile" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.601115 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="c465c9ea-b2d8-46b2-85c6-e628b568de25" containerName="manage-dockerfile" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.601151 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fcbd9a7f-cee1-4604-b31d-4b060974cbed" containerName="git-clone" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.601159 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcbd9a7f-cee1-4604-b31d-4b060974cbed" containerName="git-clone" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.601418 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="c465c9ea-b2d8-46b2-85c6-e628b568de25" containerName="manage-dockerfile" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.601443 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="fcbd9a7f-cee1-4604-b31d-4b060974cbed" containerName="git-clone" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.605318 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.607597 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-4d6zg\"" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.608018 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-sys-config\"" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.608727 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-global-ca\"" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.611541 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-ca\"" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.623758 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.753660 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.753709 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.753734 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.753751 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-builder-dockercfg-4d6zg-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.753777 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.753804 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.753823 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.753841 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-builder-dockercfg-4d6zg-push\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.753856 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.753887 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.753917 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.753938 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ks65\" (UniqueName: \"kubernetes.io/projected/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-kube-api-access-9ks65\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.855189 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.855249 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-proxy-ca-bundles\") pod 
\"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.855286 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.855333 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-builder-dockercfg-4d6zg-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.855370 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.855407 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.855431 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.855493 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-builder-dockercfg-4d6zg-push\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.855518 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.855912 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc 
kubenswrapper[5116]: I1209 14:29:12.856043 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.856162 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.856157 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.856192 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.856341 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.856581 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.856635 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.856658 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9ks65\" (UniqueName: \"kubernetes.io/projected/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-kube-api-access-9ks65\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.856735 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-buildcachedir\") pod 
\"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.856775 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.857483 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.861935 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-builder-dockercfg-4d6zg-push\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.871392 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-builder-dockercfg-4d6zg-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.874086 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ks65\" (UniqueName: \"kubernetes.io/projected/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-kube-api-access-9ks65\") pod \"service-telemetry-operator-3-build\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:12 crc kubenswrapper[5116]: I1209 14:29:12.919186 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:13 crc kubenswrapper[5116]: I1209 14:29:13.517843 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 09 14:29:14 crc kubenswrapper[5116]: I1209 14:29:14.138133 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"ffb38963-91c9-43d4-b582-79a5ef9d4fe7","Type":"ContainerStarted","Data":"9eeb16950e261e249b836a4bbf138a31f6703a9c94056994ca6d2261050f99b7"} Dec 09 14:29:14 crc kubenswrapper[5116]: I1209 14:29:14.138393 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"ffb38963-91c9-43d4-b582-79a5ef9d4fe7","Type":"ContainerStarted","Data":"ae112faa1f73cb78574c8cf5257e8be26c70a6c6b961c87056ee6f039c2bf04a"} Dec 09 14:29:14 crc kubenswrapper[5116]: I1209 14:29:14.193087 5116 ???:1] "http: TLS handshake error from 192.168.126.11:37670: no serving certificate available for the kubelet" Dec 09 14:29:15 crc kubenswrapper[5116]: I1209 14:29:15.224188 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.232575 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-3-build" podUID="ffb38963-91c9-43d4-b582-79a5ef9d4fe7" containerName="git-clone" containerID="cri-o://9eeb16950e261e249b836a4bbf138a31f6703a9c94056994ca6d2261050f99b7" gracePeriod=30 Dec 09 14:29:16 crc kubenswrapper[5116]: E1209 14:29:16.265046 5116 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffb38963_91c9_43d4_b582_79a5ef9d4fe7.slice/crio-conmon-9eeb16950e261e249b836a4bbf138a31f6703a9c94056994ca6d2261050f99b7.scope\": RecentStats: unable to find data in memory cache]" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.283593 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.677916 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_ffb38963-91c9-43d4-b582-79a5ef9d4fe7/git-clone/0.log" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.678220 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.811672 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-ca-bundles\") pod \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.811788 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-container-storage-run\") pod \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.811828 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-builder-dockercfg-4d6zg-pull\") pod \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.811982 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ks65\" (UniqueName: \"kubernetes.io/projected/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-kube-api-access-9ks65\") pod \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.812052 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-system-configs\") pod \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.812132 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-builder-dockercfg-4d6zg-push\") pod \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.812157 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-buildcachedir\") pod \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.812206 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-container-storage-root\") pod \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.812251 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-node-pullsecrets\") pod \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.812285 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-proxy-ca-bundles\") pod \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.812317 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-buildworkdir\") pod \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.812405 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-blob-cache\") pod \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\" (UID: \"ffb38963-91c9-43d4-b582-79a5ef9d4fe7\") " Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.812423 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "ffb38963-91c9-43d4-b582-79a5ef9d4fe7" (UID: "ffb38963-91c9-43d4-b582-79a5ef9d4fe7"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.812650 5116 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.813044 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "ffb38963-91c9-43d4-b582-79a5ef9d4fe7" (UID: "ffb38963-91c9-43d4-b582-79a5ef9d4fe7"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.813092 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "ffb38963-91c9-43d4-b582-79a5ef9d4fe7" (UID: "ffb38963-91c9-43d4-b582-79a5ef9d4fe7"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.813253 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "ffb38963-91c9-43d4-b582-79a5ef9d4fe7" (UID: "ffb38963-91c9-43d4-b582-79a5ef9d4fe7"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.813292 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "ffb38963-91c9-43d4-b582-79a5ef9d4fe7" (UID: "ffb38963-91c9-43d4-b582-79a5ef9d4fe7"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.813659 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "ffb38963-91c9-43d4-b582-79a5ef9d4fe7" (UID: "ffb38963-91c9-43d4-b582-79a5ef9d4fe7"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.813829 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "ffb38963-91c9-43d4-b582-79a5ef9d4fe7" (UID: "ffb38963-91c9-43d4-b582-79a5ef9d4fe7"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.814381 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "ffb38963-91c9-43d4-b582-79a5ef9d4fe7" (UID: "ffb38963-91c9-43d4-b582-79a5ef9d4fe7"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.814527 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "ffb38963-91c9-43d4-b582-79a5ef9d4fe7" (UID: "ffb38963-91c9-43d4-b582-79a5ef9d4fe7"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.818359 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-builder-dockercfg-4d6zg-push" (OuterVolumeSpecName: "builder-dockercfg-4d6zg-push") pod "ffb38963-91c9-43d4-b582-79a5ef9d4fe7" (UID: "ffb38963-91c9-43d4-b582-79a5ef9d4fe7"). InnerVolumeSpecName "builder-dockercfg-4d6zg-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.825161 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-builder-dockercfg-4d6zg-pull" (OuterVolumeSpecName: "builder-dockercfg-4d6zg-pull") pod "ffb38963-91c9-43d4-b582-79a5ef9d4fe7" (UID: "ffb38963-91c9-43d4-b582-79a5ef9d4fe7"). InnerVolumeSpecName "builder-dockercfg-4d6zg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.827890 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-kube-api-access-9ks65" (OuterVolumeSpecName: "kube-api-access-9ks65") pod "ffb38963-91c9-43d4-b582-79a5ef9d4fe7" (UID: "ffb38963-91c9-43d4-b582-79a5ef9d4fe7"). InnerVolumeSpecName "kube-api-access-9ks65". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.914164 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9ks65\" (UniqueName: \"kubernetes.io/projected/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-kube-api-access-9ks65\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.914217 5116 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.914234 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-builder-dockercfg-4d6zg-push\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.914249 5116 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.914262 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.914276 5116 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.914288 5116 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.914304 5116 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.914316 5116 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.914328 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:16 crc kubenswrapper[5116]: I1209 14:29:16.914341 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/ffb38963-91c9-43d4-b582-79a5ef9d4fe7-builder-dockercfg-4d6zg-pull\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:17 crc kubenswrapper[5116]: I1209 14:29:17.243192 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_ffb38963-91c9-43d4-b582-79a5ef9d4fe7/git-clone/0.log" Dec 09 14:29:17 crc kubenswrapper[5116]: I1209 14:29:17.243244 5116 generic.go:358] "Generic (PLEG): container finished" podID="ffb38963-91c9-43d4-b582-79a5ef9d4fe7" 
containerID="9eeb16950e261e249b836a4bbf138a31f6703a9c94056994ca6d2261050f99b7" exitCode=1 Dec 09 14:29:17 crc kubenswrapper[5116]: I1209 14:29:17.243422 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 14:29:17 crc kubenswrapper[5116]: I1209 14:29:17.243432 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"ffb38963-91c9-43d4-b582-79a5ef9d4fe7","Type":"ContainerDied","Data":"9eeb16950e261e249b836a4bbf138a31f6703a9c94056994ca6d2261050f99b7"} Dec 09 14:29:17 crc kubenswrapper[5116]: I1209 14:29:17.243492 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"ffb38963-91c9-43d4-b582-79a5ef9d4fe7","Type":"ContainerDied","Data":"ae112faa1f73cb78574c8cf5257e8be26c70a6c6b961c87056ee6f039c2bf04a"} Dec 09 14:29:17 crc kubenswrapper[5116]: I1209 14:29:17.243515 5116 scope.go:117] "RemoveContainer" containerID="9eeb16950e261e249b836a4bbf138a31f6703a9c94056994ca6d2261050f99b7" Dec 09 14:29:17 crc kubenswrapper[5116]: I1209 14:29:17.272161 5116 scope.go:117] "RemoveContainer" containerID="9eeb16950e261e249b836a4bbf138a31f6703a9c94056994ca6d2261050f99b7" Dec 09 14:29:17 crc kubenswrapper[5116]: E1209 14:29:17.272523 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eeb16950e261e249b836a4bbf138a31f6703a9c94056994ca6d2261050f99b7\": container with ID starting with 9eeb16950e261e249b836a4bbf138a31f6703a9c94056994ca6d2261050f99b7 not found: ID does not exist" containerID="9eeb16950e261e249b836a4bbf138a31f6703a9c94056994ca6d2261050f99b7" Dec 09 14:29:17 crc kubenswrapper[5116]: I1209 14:29:17.272556 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eeb16950e261e249b836a4bbf138a31f6703a9c94056994ca6d2261050f99b7"} err="failed to get container status \"9eeb16950e261e249b836a4bbf138a31f6703a9c94056994ca6d2261050f99b7\": rpc error: code = NotFound desc = could not find container \"9eeb16950e261e249b836a4bbf138a31f6703a9c94056994ca6d2261050f99b7\": container with ID starting with 9eeb16950e261e249b836a4bbf138a31f6703a9c94056994ca6d2261050f99b7 not found: ID does not exist" Dec 09 14:29:17 crc kubenswrapper[5116]: I1209 14:29:17.298943 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 09 14:29:17 crc kubenswrapper[5116]: I1209 14:29:17.309139 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 09 14:29:17 crc kubenswrapper[5116]: I1209 14:29:17.757398 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb38963-91c9-43d4-b582-79a5ef9d4fe7" path="/var/lib/kubelet/pods/ffb38963-91c9-43d4-b582-79a5ef9d4fe7/volumes" Dec 09 14:29:22 crc kubenswrapper[5116]: I1209 14:29:22.103492 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-554lf_2a441b53-f957-4f01-a123-a96c637c3fe2/kube-multus/0.log" Dec 09 14:29:22 crc kubenswrapper[5116]: I1209 14:29:22.109080 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-554lf_2a441b53-f957-4f01-a123-a96c637c3fe2/kube-multus/0.log" Dec 09 14:29:22 crc kubenswrapper[5116]: I1209 14:29:22.116644 5116 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:29:22 crc kubenswrapper[5116]: I1209 14:29:22.121300 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.634528 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.635366 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ffb38963-91c9-43d4-b582-79a5ef9d4fe7" containerName="git-clone" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.635380 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb38963-91c9-43d4-b582-79a5ef9d4fe7" containerName="git-clone" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.635495 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="ffb38963-91c9-43d4-b582-79a5ef9d4fe7" containerName="git-clone" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.795789 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.795879 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.801344 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-sys-config\"" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.801473 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-global-ca\"" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.802009 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-4d6zg\"" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.802724 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-ca\"" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.845026 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/83ef3606-a812-4a49-a4b4-2259632b6f37-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.845100 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.845187 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/83ef3606-a812-4a49-a4b4-2259632b6f37-builder-dockercfg-4d6zg-pull\") pod 
\"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.845278 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/83ef3606-a812-4a49-a4b4-2259632b6f37-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.845358 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.845446 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.845480 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.845512 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlnnv\" (UniqueName: \"kubernetes.io/projected/83ef3606-a812-4a49-a4b4-2259632b6f37-kube-api-access-wlnnv\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.845556 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.845625 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.845797 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: 
\"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.845855 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/83ef3606-a812-4a49-a4b4-2259632b6f37-builder-dockercfg-4d6zg-push\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.946752 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.947115 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.947163 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.947190 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/83ef3606-a812-4a49-a4b4-2259632b6f37-builder-dockercfg-4d6zg-push\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.947241 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/83ef3606-a812-4a49-a4b4-2259632b6f37-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.947266 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.947289 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/83ef3606-a812-4a49-a4b4-2259632b6f37-builder-dockercfg-4d6zg-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 
14:29:26.947308 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/83ef3606-a812-4a49-a4b4-2259632b6f37-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.947329 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.947363 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.947382 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.947397 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wlnnv\" (UniqueName: \"kubernetes.io/projected/83ef3606-a812-4a49-a4b4-2259632b6f37-kube-api-access-wlnnv\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.947680 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.948116 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.948378 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.948517 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/83ef3606-a812-4a49-a4b4-2259632b6f37-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: 
\"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.948788 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.948947 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.948926 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/83ef3606-a812-4a49-a4b4-2259632b6f37-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.949052 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.950124 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.956349 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/83ef3606-a812-4a49-a4b4-2259632b6f37-builder-dockercfg-4d6zg-push\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.956482 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/83ef3606-a812-4a49-a4b4-2259632b6f37-builder-dockercfg-4d6zg-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:26 crc kubenswrapper[5116]: I1209 14:29:26.981192 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlnnv\" (UniqueName: \"kubernetes.io/projected/83ef3606-a812-4a49-a4b4-2259632b6f37-kube-api-access-wlnnv\") pod \"service-telemetry-operator-4-build\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:27 crc kubenswrapper[5116]: I1209 14:29:27.119967 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:27 crc kubenswrapper[5116]: I1209 14:29:27.605817 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 14:29:28 crc kubenswrapper[5116]: I1209 14:29:28.316836 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"83ef3606-a812-4a49-a4b4-2259632b6f37","Type":"ContainerStarted","Data":"4f16c1a77b0aa73e362b3fe466d13d835d46028dcf68f257844004cb3a143c71"} Dec 09 14:29:28 crc kubenswrapper[5116]: I1209 14:29:28.317193 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"83ef3606-a812-4a49-a4b4-2259632b6f37","Type":"ContainerStarted","Data":"c2212d1304768793b26e7e0071f29d9bf0be4834e1632b5c645db640795c3f80"} Dec 09 14:29:28 crc kubenswrapper[5116]: I1209 14:29:28.381596 5116 ???:1] "http: TLS handshake error from 192.168.126.11:58744: no serving certificate available for the kubelet" Dec 09 14:29:29 crc kubenswrapper[5116]: I1209 14:29:29.418747 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 14:29:30 crc kubenswrapper[5116]: I1209 14:29:30.328765 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-4-build" podUID="83ef3606-a812-4a49-a4b4-2259632b6f37" containerName="git-clone" containerID="cri-o://4f16c1a77b0aa73e362b3fe466d13d835d46028dcf68f257844004cb3a143c71" gracePeriod=30 Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.343925 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_83ef3606-a812-4a49-a4b4-2259632b6f37/git-clone/0.log" Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.344211 5116 generic.go:358] "Generic (PLEG): container finished" podID="83ef3606-a812-4a49-a4b4-2259632b6f37" containerID="4f16c1a77b0aa73e362b3fe466d13d835d46028dcf68f257844004cb3a143c71" exitCode=1 Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.344364 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"83ef3606-a812-4a49-a4b4-2259632b6f37","Type":"ContainerDied","Data":"4f16c1a77b0aa73e362b3fe466d13d835d46028dcf68f257844004cb3a143c71"} Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.846209 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_83ef3606-a812-4a49-a4b4-2259632b6f37/git-clone/0.log" Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.846535 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.925583 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-container-storage-run\") pod \"83ef3606-a812-4a49-a4b4-2259632b6f37\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.925622 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-ca-bundles\") pod \"83ef3606-a812-4a49-a4b4-2259632b6f37\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.925643 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-build-blob-cache\") pod \"83ef3606-a812-4a49-a4b4-2259632b6f37\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.925667 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-container-storage-root\") pod \"83ef3606-a812-4a49-a4b4-2259632b6f37\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.925694 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-proxy-ca-bundles\") pod \"83ef3606-a812-4a49-a4b4-2259632b6f37\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.925710 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/83ef3606-a812-4a49-a4b4-2259632b6f37-builder-dockercfg-4d6zg-push\") pod \"83ef3606-a812-4a49-a4b4-2259632b6f37\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.925743 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlnnv\" (UniqueName: \"kubernetes.io/projected/83ef3606-a812-4a49-a4b4-2259632b6f37-kube-api-access-wlnnv\") pod \"83ef3606-a812-4a49-a4b4-2259632b6f37\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.925774 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/83ef3606-a812-4a49-a4b4-2259632b6f37-buildcachedir\") pod \"83ef3606-a812-4a49-a4b4-2259632b6f37\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.925812 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/83ef3606-a812-4a49-a4b4-2259632b6f37-node-pullsecrets\") pod \"83ef3606-a812-4a49-a4b4-2259632b6f37\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.925850 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4d6zg-pull\" 
(UniqueName: \"kubernetes.io/secret/83ef3606-a812-4a49-a4b4-2259632b6f37-builder-dockercfg-4d6zg-pull\") pod \"83ef3606-a812-4a49-a4b4-2259632b6f37\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.925876 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-system-configs\") pod \"83ef3606-a812-4a49-a4b4-2259632b6f37\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.925892 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-buildworkdir\") pod \"83ef3606-a812-4a49-a4b4-2259632b6f37\" (UID: \"83ef3606-a812-4a49-a4b4-2259632b6f37\") " Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.926591 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83ef3606-a812-4a49-a4b4-2259632b6f37-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "83ef3606-a812-4a49-a4b4-2259632b6f37" (UID: "83ef3606-a812-4a49-a4b4-2259632b6f37"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.926647 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83ef3606-a812-4a49-a4b4-2259632b6f37-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "83ef3606-a812-4a49-a4b4-2259632b6f37" (UID: "83ef3606-a812-4a49-a4b4-2259632b6f37"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.926760 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "83ef3606-a812-4a49-a4b4-2259632b6f37" (UID: "83ef3606-a812-4a49-a4b4-2259632b6f37"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.927007 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "83ef3606-a812-4a49-a4b4-2259632b6f37" (UID: "83ef3606-a812-4a49-a4b4-2259632b6f37"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.927224 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "83ef3606-a812-4a49-a4b4-2259632b6f37" (UID: "83ef3606-a812-4a49-a4b4-2259632b6f37"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.927423 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "83ef3606-a812-4a49-a4b4-2259632b6f37" (UID: "83ef3606-a812-4a49-a4b4-2259632b6f37"). 
InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.927149 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "83ef3606-a812-4a49-a4b4-2259632b6f37" (UID: "83ef3606-a812-4a49-a4b4-2259632b6f37"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.927475 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "83ef3606-a812-4a49-a4b4-2259632b6f37" (UID: "83ef3606-a812-4a49-a4b4-2259632b6f37"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.927496 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "83ef3606-a812-4a49-a4b4-2259632b6f37" (UID: "83ef3606-a812-4a49-a4b4-2259632b6f37"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.932107 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ef3606-a812-4a49-a4b4-2259632b6f37-builder-dockercfg-4d6zg-push" (OuterVolumeSpecName: "builder-dockercfg-4d6zg-push") pod "83ef3606-a812-4a49-a4b4-2259632b6f37" (UID: "83ef3606-a812-4a49-a4b4-2259632b6f37"). InnerVolumeSpecName "builder-dockercfg-4d6zg-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.932133 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ef3606-a812-4a49-a4b4-2259632b6f37-builder-dockercfg-4d6zg-pull" (OuterVolumeSpecName: "builder-dockercfg-4d6zg-pull") pod "83ef3606-a812-4a49-a4b4-2259632b6f37" (UID: "83ef3606-a812-4a49-a4b4-2259632b6f37"). InnerVolumeSpecName "builder-dockercfg-4d6zg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:29:31 crc kubenswrapper[5116]: I1209 14:29:31.932166 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ef3606-a812-4a49-a4b4-2259632b6f37-kube-api-access-wlnnv" (OuterVolumeSpecName: "kube-api-access-wlnnv") pod "83ef3606-a812-4a49-a4b4-2259632b6f37" (UID: "83ef3606-a812-4a49-a4b4-2259632b6f37"). InnerVolumeSpecName "kube-api-access-wlnnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.027294 5116 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/83ef3606-a812-4a49-a4b4-2259632b6f37-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.027332 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/83ef3606-a812-4a49-a4b4-2259632b6f37-builder-dockercfg-4d6zg-pull\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.027343 5116 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.027352 5116 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.027360 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.027369 5116 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.027377 5116 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.027384 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/83ef3606-a812-4a49-a4b4-2259632b6f37-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.027392 5116 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83ef3606-a812-4a49-a4b4-2259632b6f37-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.027400 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/83ef3606-a812-4a49-a4b4-2259632b6f37-builder-dockercfg-4d6zg-push\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.027407 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wlnnv\" (UniqueName: \"kubernetes.io/projected/83ef3606-a812-4a49-a4b4-2259632b6f37-kube-api-access-wlnnv\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.027415 5116 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/83ef3606-a812-4a49-a4b4-2259632b6f37-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.354386 5116 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_83ef3606-a812-4a49-a4b4-2259632b6f37/git-clone/0.log" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.354598 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"83ef3606-a812-4a49-a4b4-2259632b6f37","Type":"ContainerDied","Data":"c2212d1304768793b26e7e0071f29d9bf0be4834e1632b5c645db640795c3f80"} Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.354660 5116 scope.go:117] "RemoveContainer" containerID="4f16c1a77b0aa73e362b3fe466d13d835d46028dcf68f257844004cb3a143c71" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.354698 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.421008 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 14:29:32 crc kubenswrapper[5116]: I1209 14:29:32.429332 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 14:29:33 crc kubenswrapper[5116]: I1209 14:29:33.759522 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ef3606-a812-4a49-a4b4-2259632b6f37" path="/var/lib/kubelet/pods/83ef3606-a812-4a49-a4b4-2259632b6f37/volumes" Dec 09 14:29:40 crc kubenswrapper[5116]: I1209 14:29:40.888790 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 14:29:40 crc kubenswrapper[5116]: I1209 14:29:40.890686 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83ef3606-a812-4a49-a4b4-2259632b6f37" containerName="git-clone" Dec 09 14:29:40 crc kubenswrapper[5116]: I1209 14:29:40.890855 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ef3606-a812-4a49-a4b4-2259632b6f37" containerName="git-clone" Dec 09 14:29:40 crc kubenswrapper[5116]: I1209 14:29:40.891496 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="83ef3606-a812-4a49-a4b4-2259632b6f37" containerName="git-clone" Dec 09 14:29:40 crc kubenswrapper[5116]: I1209 14:29:40.908297 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 14:29:40 crc kubenswrapper[5116]: I1209 14:29:40.908522 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:40 crc kubenswrapper[5116]: I1209 14:29:40.955782 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-global-ca\"" Dec 09 14:29:40 crc kubenswrapper[5116]: I1209 14:29:40.956079 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-4d6zg\"" Dec 09 14:29:40 crc kubenswrapper[5116]: I1209 14:29:40.956190 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-ca\"" Dec 09 14:29:40 crc kubenswrapper[5116]: I1209 14:29:40.956233 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-sys-config\"" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.058647 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q4bq\" (UniqueName: \"kubernetes.io/projected/bef17583-2e8b-4675-84b3-1974406477bd-kube-api-access-2q4bq\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.058701 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/bef17583-2e8b-4675-84b3-1974406477bd-builder-dockercfg-4d6zg-push\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.058726 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.058803 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bef17583-2e8b-4675-84b3-1974406477bd-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.058820 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.058836 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bef17583-2e8b-4675-84b3-1974406477bd-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 
crc kubenswrapper[5116]: I1209 14:29:41.058926 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.059054 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.059180 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.059214 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/bef17583-2e8b-4675-84b3-1974406477bd-builder-dockercfg-4d6zg-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.059245 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.059285 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.161332 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bef17583-2e8b-4675-84b3-1974406477bd-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.161400 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.161451 5116 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bef17583-2e8b-4675-84b3-1974406477bd-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.161433 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bef17583-2e8b-4675-84b3-1974406477bd-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.161539 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.161637 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.161758 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.161816 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/bef17583-2e8b-4675-84b3-1974406477bd-builder-dockercfg-4d6zg-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.161874 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.161944 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.162137 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2q4bq\" (UniqueName: \"kubernetes.io/projected/bef17583-2e8b-4675-84b3-1974406477bd-kube-api-access-2q4bq\") pod \"service-telemetry-operator-5-build\" (UID: 
\"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.162208 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/bef17583-2e8b-4675-84b3-1974406477bd-builder-dockercfg-4d6zg-push\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.162233 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.162272 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.162138 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.163176 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.163251 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bef17583-2e8b-4675-84b3-1974406477bd-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.163503 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.163906 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.163998 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.164644 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.169492 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/bef17583-2e8b-4675-84b3-1974406477bd-builder-dockercfg-4d6zg-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.181673 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/bef17583-2e8b-4675-84b3-1974406477bd-builder-dockercfg-4d6zg-push\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.195045 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q4bq\" (UniqueName: \"kubernetes.io/projected/bef17583-2e8b-4675-84b3-1974406477bd-kube-api-access-2q4bq\") pod \"service-telemetry-operator-5-build\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.285093 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:41 crc kubenswrapper[5116]: I1209 14:29:41.524613 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 14:29:42 crc kubenswrapper[5116]: I1209 14:29:42.434467 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"bef17583-2e8b-4675-84b3-1974406477bd","Type":"ContainerStarted","Data":"171cf19e7caa230191c7e63e099c8a8c9ae7b5a8b2f8251cd012e38724db2dfd"} Dec 09 14:29:42 crc kubenswrapper[5116]: I1209 14:29:42.434776 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"bef17583-2e8b-4675-84b3-1974406477bd","Type":"ContainerStarted","Data":"45a5443e098e82d5c6ba4343232c641b2f47958ad365500bb30adf08bd46b9bb"} Dec 09 14:29:42 crc kubenswrapper[5116]: I1209 14:29:42.480147 5116 ???:1] "http: TLS handshake error from 192.168.126.11:50004: no serving certificate available for the kubelet" Dec 09 14:29:43 crc kubenswrapper[5116]: I1209 14:29:43.510105 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.447616 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-5-build" podUID="bef17583-2e8b-4675-84b3-1974406477bd" containerName="git-clone" containerID="cri-o://171cf19e7caa230191c7e63e099c8a8c9ae7b5a8b2f8251cd012e38724db2dfd" gracePeriod=30 Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.837279 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_bef17583-2e8b-4675-84b3-1974406477bd/git-clone/0.log" Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.837642 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.915796 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bef17583-2e8b-4675-84b3-1974406477bd-node-pullsecrets\") pod \"bef17583-2e8b-4675-84b3-1974406477bd\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.916183 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/bef17583-2e8b-4675-84b3-1974406477bd-builder-dockercfg-4d6zg-push\") pod \"bef17583-2e8b-4675-84b3-1974406477bd\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.916316 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-ca-bundles\") pod \"bef17583-2e8b-4675-84b3-1974406477bd\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.916883 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-container-storage-root\") pod \"bef17583-2e8b-4675-84b3-1974406477bd\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.917031 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-container-storage-run\") pod \"bef17583-2e8b-4675-84b3-1974406477bd\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.917144 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-proxy-ca-bundles\") pod \"bef17583-2e8b-4675-84b3-1974406477bd\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.917243 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/bef17583-2e8b-4675-84b3-1974406477bd-builder-dockercfg-4d6zg-pull\") pod \"bef17583-2e8b-4675-84b3-1974406477bd\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.917309 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-system-configs\") pod \"bef17583-2e8b-4675-84b3-1974406477bd\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.917393 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-buildworkdir\") pod \"bef17583-2e8b-4675-84b3-1974406477bd\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.917470 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" 
(UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-build-blob-cache\") pod \"bef17583-2e8b-4675-84b3-1974406477bd\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.917600 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bef17583-2e8b-4675-84b3-1974406477bd-buildcachedir\") pod \"bef17583-2e8b-4675-84b3-1974406477bd\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.917704 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q4bq\" (UniqueName: \"kubernetes.io/projected/bef17583-2e8b-4675-84b3-1974406477bd-kube-api-access-2q4bq\") pod \"bef17583-2e8b-4675-84b3-1974406477bd\" (UID: \"bef17583-2e8b-4675-84b3-1974406477bd\") " Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.915877 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bef17583-2e8b-4675-84b3-1974406477bd-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "bef17583-2e8b-4675-84b3-1974406477bd" (UID: "bef17583-2e8b-4675-84b3-1974406477bd"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.919330 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "bef17583-2e8b-4675-84b3-1974406477bd" (UID: "bef17583-2e8b-4675-84b3-1974406477bd"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.919885 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "bef17583-2e8b-4675-84b3-1974406477bd" (UID: "bef17583-2e8b-4675-84b3-1974406477bd"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.919989 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "bef17583-2e8b-4675-84b3-1974406477bd" (UID: "bef17583-2e8b-4675-84b3-1974406477bd"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.920027 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bef17583-2e8b-4675-84b3-1974406477bd-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "bef17583-2e8b-4675-84b3-1974406477bd" (UID: "bef17583-2e8b-4675-84b3-1974406477bd"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.920227 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "bef17583-2e8b-4675-84b3-1974406477bd" (UID: "bef17583-2e8b-4675-84b3-1974406477bd"). 
InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.920242 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "bef17583-2e8b-4675-84b3-1974406477bd" (UID: "bef17583-2e8b-4675-84b3-1974406477bd"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.920486 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "bef17583-2e8b-4675-84b3-1974406477bd" (UID: "bef17583-2e8b-4675-84b3-1974406477bd"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.920512 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "bef17583-2e8b-4675-84b3-1974406477bd" (UID: "bef17583-2e8b-4675-84b3-1974406477bd"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.924473 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef17583-2e8b-4675-84b3-1974406477bd-builder-dockercfg-4d6zg-push" (OuterVolumeSpecName: "builder-dockercfg-4d6zg-push") pod "bef17583-2e8b-4675-84b3-1974406477bd" (UID: "bef17583-2e8b-4675-84b3-1974406477bd"). InnerVolumeSpecName "builder-dockercfg-4d6zg-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.924560 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bef17583-2e8b-4675-84b3-1974406477bd-kube-api-access-2q4bq" (OuterVolumeSpecName: "kube-api-access-2q4bq") pod "bef17583-2e8b-4675-84b3-1974406477bd" (UID: "bef17583-2e8b-4675-84b3-1974406477bd"). InnerVolumeSpecName "kube-api-access-2q4bq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:29:44 crc kubenswrapper[5116]: I1209 14:29:44.932176 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef17583-2e8b-4675-84b3-1974406477bd-builder-dockercfg-4d6zg-pull" (OuterVolumeSpecName: "builder-dockercfg-4d6zg-pull") pod "bef17583-2e8b-4675-84b3-1974406477bd" (UID: "bef17583-2e8b-4675-84b3-1974406477bd"). InnerVolumeSpecName "builder-dockercfg-4d6zg-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.019287 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.019316 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.019324 5116 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.019334 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4d6zg-pull\" (UniqueName: \"kubernetes.io/secret/bef17583-2e8b-4675-84b3-1974406477bd-builder-dockercfg-4d6zg-pull\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.019342 5116 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.019350 5116 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.019358 5116 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bef17583-2e8b-4675-84b3-1974406477bd-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.019365 5116 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bef17583-2e8b-4675-84b3-1974406477bd-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.019374 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2q4bq\" (UniqueName: \"kubernetes.io/projected/bef17583-2e8b-4675-84b3-1974406477bd-kube-api-access-2q4bq\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.019381 5116 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bef17583-2e8b-4675-84b3-1974406477bd-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.019389 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4d6zg-push\" (UniqueName: \"kubernetes.io/secret/bef17583-2e8b-4675-84b3-1974406477bd-builder-dockercfg-4d6zg-push\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.019396 5116 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bef17583-2e8b-4675-84b3-1974406477bd-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.456639 5116 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_bef17583-2e8b-4675-84b3-1974406477bd/git-clone/0.log" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.456691 5116 generic.go:358] "Generic (PLEG): container finished" podID="bef17583-2e8b-4675-84b3-1974406477bd" containerID="171cf19e7caa230191c7e63e099c8a8c9ae7b5a8b2f8251cd012e38724db2dfd" exitCode=1 Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.456789 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.456897 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"bef17583-2e8b-4675-84b3-1974406477bd","Type":"ContainerDied","Data":"171cf19e7caa230191c7e63e099c8a8c9ae7b5a8b2f8251cd012e38724db2dfd"} Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.456975 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"bef17583-2e8b-4675-84b3-1974406477bd","Type":"ContainerDied","Data":"45a5443e098e82d5c6ba4343232c641b2f47958ad365500bb30adf08bd46b9bb"} Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.456999 5116 scope.go:117] "RemoveContainer" containerID="171cf19e7caa230191c7e63e099c8a8c9ae7b5a8b2f8251cd012e38724db2dfd" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.489979 5116 scope.go:117] "RemoveContainer" containerID="171cf19e7caa230191c7e63e099c8a8c9ae7b5a8b2f8251cd012e38724db2dfd" Dec 09 14:29:45 crc kubenswrapper[5116]: E1209 14:29:45.490825 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"171cf19e7caa230191c7e63e099c8a8c9ae7b5a8b2f8251cd012e38724db2dfd\": container with ID starting with 171cf19e7caa230191c7e63e099c8a8c9ae7b5a8b2f8251cd012e38724db2dfd not found: ID does not exist" containerID="171cf19e7caa230191c7e63e099c8a8c9ae7b5a8b2f8251cd012e38724db2dfd" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.492372 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"171cf19e7caa230191c7e63e099c8a8c9ae7b5a8b2f8251cd012e38724db2dfd"} err="failed to get container status \"171cf19e7caa230191c7e63e099c8a8c9ae7b5a8b2f8251cd012e38724db2dfd\": rpc error: code = NotFound desc = could not find container \"171cf19e7caa230191c7e63e099c8a8c9ae7b5a8b2f8251cd012e38724db2dfd\": container with ID starting with 171cf19e7caa230191c7e63e099c8a8c9ae7b5a8b2f8251cd012e38724db2dfd not found: ID does not exist" Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.502076 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.506356 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 14:29:45 crc kubenswrapper[5116]: I1209 14:29:45.759879 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bef17583-2e8b-4675-84b3-1974406477bd" path="/var/lib/kubelet/pods/bef17583-2e8b-4675-84b3-1974406477bd/volumes" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.170809 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp"] Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.172411 5116 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="bef17583-2e8b-4675-84b3-1974406477bd" containerName="git-clone" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.172433 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="bef17583-2e8b-4675-84b3-1974406477bd" containerName="git-clone" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.172629 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="bef17583-2e8b-4675-84b3-1974406477bd" containerName="git-clone" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.181456 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp"] Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.181619 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.184895 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.187673 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.263180 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3f3ea63-f170-48d5-a066-8f03f8db8557-secret-volume\") pod \"collect-profiles-29421510-lhvdp\" (UID: \"a3f3ea63-f170-48d5-a066-8f03f8db8557\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.263389 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3f3ea63-f170-48d5-a066-8f03f8db8557-config-volume\") pod \"collect-profiles-29421510-lhvdp\" (UID: \"a3f3ea63-f170-48d5-a066-8f03f8db8557\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.263495 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llgqr\" (UniqueName: \"kubernetes.io/projected/a3f3ea63-f170-48d5-a066-8f03f8db8557-kube-api-access-llgqr\") pod \"collect-profiles-29421510-lhvdp\" (UID: \"a3f3ea63-f170-48d5-a066-8f03f8db8557\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.364634 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-llgqr\" (UniqueName: \"kubernetes.io/projected/a3f3ea63-f170-48d5-a066-8f03f8db8557-kube-api-access-llgqr\") pod \"collect-profiles-29421510-lhvdp\" (UID: \"a3f3ea63-f170-48d5-a066-8f03f8db8557\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.364705 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3f3ea63-f170-48d5-a066-8f03f8db8557-secret-volume\") pod \"collect-profiles-29421510-lhvdp\" (UID: \"a3f3ea63-f170-48d5-a066-8f03f8db8557\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" Dec 09 14:30:00 crc 
kubenswrapper[5116]: I1209 14:30:00.364772 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3f3ea63-f170-48d5-a066-8f03f8db8557-config-volume\") pod \"collect-profiles-29421510-lhvdp\" (UID: \"a3f3ea63-f170-48d5-a066-8f03f8db8557\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.365718 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3f3ea63-f170-48d5-a066-8f03f8db8557-config-volume\") pod \"collect-profiles-29421510-lhvdp\" (UID: \"a3f3ea63-f170-48d5-a066-8f03f8db8557\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.372599 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3f3ea63-f170-48d5-a066-8f03f8db8557-secret-volume\") pod \"collect-profiles-29421510-lhvdp\" (UID: \"a3f3ea63-f170-48d5-a066-8f03f8db8557\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.387201 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-llgqr\" (UniqueName: \"kubernetes.io/projected/a3f3ea63-f170-48d5-a066-8f03f8db8557-kube-api-access-llgqr\") pod \"collect-profiles-29421510-lhvdp\" (UID: \"a3f3ea63-f170-48d5-a066-8f03f8db8557\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.511424 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" Dec 09 14:30:00 crc kubenswrapper[5116]: I1209 14:30:00.736568 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp"] Dec 09 14:30:01 crc kubenswrapper[5116]: I1209 14:30:01.591186 5116 generic.go:358] "Generic (PLEG): container finished" podID="a3f3ea63-f170-48d5-a066-8f03f8db8557" containerID="db3102260be348cd457b2bbb3935b70dc32c09d53e308ab02afef14886dd1cd0" exitCode=0 Dec 09 14:30:01 crc kubenswrapper[5116]: I1209 14:30:01.591299 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" event={"ID":"a3f3ea63-f170-48d5-a066-8f03f8db8557","Type":"ContainerDied","Data":"db3102260be348cd457b2bbb3935b70dc32c09d53e308ab02afef14886dd1cd0"} Dec 09 14:30:01 crc kubenswrapper[5116]: I1209 14:30:01.591667 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" event={"ID":"a3f3ea63-f170-48d5-a066-8f03f8db8557","Type":"ContainerStarted","Data":"d993b24e5fe96169be3602e3b2739641e70a6742e4dc40d7b6e5a89e73efcbe5"} Dec 09 14:30:02 crc kubenswrapper[5116]: I1209 14:30:02.866214 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" Dec 09 14:30:03 crc kubenswrapper[5116]: I1209 14:30:03.006508 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llgqr\" (UniqueName: \"kubernetes.io/projected/a3f3ea63-f170-48d5-a066-8f03f8db8557-kube-api-access-llgqr\") pod \"a3f3ea63-f170-48d5-a066-8f03f8db8557\" (UID: \"a3f3ea63-f170-48d5-a066-8f03f8db8557\") " Dec 09 14:30:03 crc kubenswrapper[5116]: I1209 14:30:03.006753 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3f3ea63-f170-48d5-a066-8f03f8db8557-config-volume\") pod \"a3f3ea63-f170-48d5-a066-8f03f8db8557\" (UID: \"a3f3ea63-f170-48d5-a066-8f03f8db8557\") " Dec 09 14:30:03 crc kubenswrapper[5116]: I1209 14:30:03.006796 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3f3ea63-f170-48d5-a066-8f03f8db8557-secret-volume\") pod \"a3f3ea63-f170-48d5-a066-8f03f8db8557\" (UID: \"a3f3ea63-f170-48d5-a066-8f03f8db8557\") " Dec 09 14:30:03 crc kubenswrapper[5116]: I1209 14:30:03.008449 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3f3ea63-f170-48d5-a066-8f03f8db8557-config-volume" (OuterVolumeSpecName: "config-volume") pod "a3f3ea63-f170-48d5-a066-8f03f8db8557" (UID: "a3f3ea63-f170-48d5-a066-8f03f8db8557"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:30:03 crc kubenswrapper[5116]: I1209 14:30:03.012290 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3f3ea63-f170-48d5-a066-8f03f8db8557-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a3f3ea63-f170-48d5-a066-8f03f8db8557" (UID: "a3f3ea63-f170-48d5-a066-8f03f8db8557"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:30:03 crc kubenswrapper[5116]: I1209 14:30:03.012433 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3f3ea63-f170-48d5-a066-8f03f8db8557-kube-api-access-llgqr" (OuterVolumeSpecName: "kube-api-access-llgqr") pod "a3f3ea63-f170-48d5-a066-8f03f8db8557" (UID: "a3f3ea63-f170-48d5-a066-8f03f8db8557"). InnerVolumeSpecName "kube-api-access-llgqr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:30:03 crc kubenswrapper[5116]: I1209 14:30:03.108715 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-llgqr\" (UniqueName: \"kubernetes.io/projected/a3f3ea63-f170-48d5-a066-8f03f8db8557-kube-api-access-llgqr\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:03 crc kubenswrapper[5116]: I1209 14:30:03.108806 5116 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3f3ea63-f170-48d5-a066-8f03f8db8557-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:03 crc kubenswrapper[5116]: I1209 14:30:03.108818 5116 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3f3ea63-f170-48d5-a066-8f03f8db8557-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:03 crc kubenswrapper[5116]: I1209 14:30:03.611365 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" event={"ID":"a3f3ea63-f170-48d5-a066-8f03f8db8557","Type":"ContainerDied","Data":"d993b24e5fe96169be3602e3b2739641e70a6742e4dc40d7b6e5a89e73efcbe5"} Dec 09 14:30:03 crc kubenswrapper[5116]: I1209 14:30:03.611626 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d993b24e5fe96169be3602e3b2739641e70a6742e4dc40d7b6e5a89e73efcbe5" Dec 09 14:30:03 crc kubenswrapper[5116]: I1209 14:30:03.611759 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-lhvdp" Dec 09 14:30:32 crc kubenswrapper[5116]: I1209 14:30:32.885404 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l96x5/must-gather-nf65k"] Dec 09 14:30:32 crc kubenswrapper[5116]: I1209 14:30:32.886732 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a3f3ea63-f170-48d5-a066-8f03f8db8557" containerName="collect-profiles" Dec 09 14:30:32 crc kubenswrapper[5116]: I1209 14:30:32.886746 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3f3ea63-f170-48d5-a066-8f03f8db8557" containerName="collect-profiles" Dec 09 14:30:32 crc kubenswrapper[5116]: I1209 14:30:32.886862 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="a3f3ea63-f170-48d5-a066-8f03f8db8557" containerName="collect-profiles" Dec 09 14:30:32 crc kubenswrapper[5116]: I1209 14:30:32.898825 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l96x5/must-gather-nf65k" Dec 09 14:30:32 crc kubenswrapper[5116]: I1209 14:30:32.900739 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-l96x5\"/\"openshift-service-ca.crt\"" Dec 09 14:30:32 crc kubenswrapper[5116]: I1209 14:30:32.901537 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-l96x5\"/\"kube-root-ca.crt\"" Dec 09 14:30:32 crc kubenswrapper[5116]: I1209 14:30:32.903946 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l96x5/must-gather-nf65k"] Dec 09 14:30:32 crc kubenswrapper[5116]: I1209 14:30:32.906013 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-l96x5\"/\"default-dockercfg-wwt8z\"" Dec 09 14:30:33 crc kubenswrapper[5116]: I1209 14:30:33.049328 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf26h\" (UniqueName: \"kubernetes.io/projected/ae6181d4-c434-4839-a5f3-7784c983c8c1-kube-api-access-sf26h\") pod \"must-gather-nf65k\" (UID: \"ae6181d4-c434-4839-a5f3-7784c983c8c1\") " pod="openshift-must-gather-l96x5/must-gather-nf65k" Dec 09 14:30:33 crc kubenswrapper[5116]: I1209 14:30:33.049421 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae6181d4-c434-4839-a5f3-7784c983c8c1-must-gather-output\") pod \"must-gather-nf65k\" (UID: \"ae6181d4-c434-4839-a5f3-7784c983c8c1\") " pod="openshift-must-gather-l96x5/must-gather-nf65k" Dec 09 14:30:33 crc kubenswrapper[5116]: I1209 14:30:33.150889 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sf26h\" (UniqueName: \"kubernetes.io/projected/ae6181d4-c434-4839-a5f3-7784c983c8c1-kube-api-access-sf26h\") pod \"must-gather-nf65k\" (UID: \"ae6181d4-c434-4839-a5f3-7784c983c8c1\") " pod="openshift-must-gather-l96x5/must-gather-nf65k" Dec 09 14:30:33 crc kubenswrapper[5116]: I1209 14:30:33.150980 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae6181d4-c434-4839-a5f3-7784c983c8c1-must-gather-output\") pod \"must-gather-nf65k\" (UID: \"ae6181d4-c434-4839-a5f3-7784c983c8c1\") " pod="openshift-must-gather-l96x5/must-gather-nf65k" Dec 09 14:30:33 crc kubenswrapper[5116]: I1209 14:30:33.151421 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae6181d4-c434-4839-a5f3-7784c983c8c1-must-gather-output\") pod \"must-gather-nf65k\" (UID: \"ae6181d4-c434-4839-a5f3-7784c983c8c1\") " pod="openshift-must-gather-l96x5/must-gather-nf65k" Dec 09 14:30:33 crc kubenswrapper[5116]: I1209 14:30:33.169627 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf26h\" (UniqueName: \"kubernetes.io/projected/ae6181d4-c434-4839-a5f3-7784c983c8c1-kube-api-access-sf26h\") pod \"must-gather-nf65k\" (UID: \"ae6181d4-c434-4839-a5f3-7784c983c8c1\") " pod="openshift-must-gather-l96x5/must-gather-nf65k" Dec 09 14:30:33 crc kubenswrapper[5116]: I1209 14:30:33.217351 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l96x5/must-gather-nf65k" Dec 09 14:30:33 crc kubenswrapper[5116]: I1209 14:30:33.473601 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l96x5/must-gather-nf65k"] Dec 09 14:30:33 crc kubenswrapper[5116]: I1209 14:30:33.812368 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l96x5/must-gather-nf65k" event={"ID":"ae6181d4-c434-4839-a5f3-7784c983c8c1","Type":"ContainerStarted","Data":"3b03226165cdbd87069785bb374c26a8bdde418011b660536611173d10e3d11c"} Dec 09 14:30:41 crc kubenswrapper[5116]: I1209 14:30:41.944564 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l96x5/must-gather-nf65k" event={"ID":"ae6181d4-c434-4839-a5f3-7784c983c8c1","Type":"ContainerStarted","Data":"d997648a2a32358c74eebe6b2577db640ec8a79dc58a46a75335522166d59d22"} Dec 09 14:30:41 crc kubenswrapper[5116]: I1209 14:30:41.945144 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l96x5/must-gather-nf65k" event={"ID":"ae6181d4-c434-4839-a5f3-7784c983c8c1","Type":"ContainerStarted","Data":"6793690fa8747ab9927cb8630a0403b63485284caea449db37da4d1b1619e130"} Dec 09 14:30:41 crc kubenswrapper[5116]: I1209 14:30:41.960153 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-l96x5/must-gather-nf65k" podStartSLOduration=2.3371738459999998 podStartE2EDuration="9.960127977s" podCreationTimestamp="2025-12-09 14:30:32 +0000 UTC" firstStartedPulling="2025-12-09 14:30:33.475554843 +0000 UTC m=+971.997299641" lastFinishedPulling="2025-12-09 14:30:41.098508964 +0000 UTC m=+979.620253772" observedRunningTime="2025-12-09 14:30:41.959904981 +0000 UTC m=+980.481649779" watchObservedRunningTime="2025-12-09 14:30:41.960127977 +0000 UTC m=+980.481872815" Dec 09 14:30:42 crc kubenswrapper[5116]: I1209 14:30:42.887643 5116 ???:1] "http: TLS handshake error from 192.168.126.11:34536: no serving certificate available for the kubelet" Dec 09 14:31:16 crc kubenswrapper[5116]: E1209 14:31:16.449468 5116 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 09 14:31:18 crc kubenswrapper[5116]: I1209 14:31:18.535729 5116 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 09 14:31:18 crc kubenswrapper[5116]: I1209 14:31:18.548203 5116 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 09 14:31:18 crc kubenswrapper[5116]: I1209 14:31:18.573516 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47454: no serving certificate available for the kubelet" Dec 09 14:31:18 crc kubenswrapper[5116]: I1209 14:31:18.604557 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47464: no serving certificate available for the kubelet" Dec 09 14:31:18 crc kubenswrapper[5116]: I1209 14:31:18.638407 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47476: no serving certificate available for the kubelet" Dec 09 14:31:18 crc kubenswrapper[5116]: I1209 14:31:18.683128 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47490: no serving certificate available for the kubelet" Dec 09 14:31:18 crc kubenswrapper[5116]: I1209 14:31:18.743751 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47502: no serving certificate available for the 
kubelet" Dec 09 14:31:18 crc kubenswrapper[5116]: I1209 14:31:18.847416 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47510: no serving certificate available for the kubelet" Dec 09 14:31:19 crc kubenswrapper[5116]: I1209 14:31:19.035065 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47524: no serving certificate available for the kubelet" Dec 09 14:31:19 crc kubenswrapper[5116]: I1209 14:31:19.386159 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47534: no serving certificate available for the kubelet" Dec 09 14:31:20 crc kubenswrapper[5116]: I1209 14:31:20.047885 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47536: no serving certificate available for the kubelet" Dec 09 14:31:21 crc kubenswrapper[5116]: I1209 14:31:21.348339 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47546: no serving certificate available for the kubelet" Dec 09 14:31:22 crc kubenswrapper[5116]: I1209 14:31:22.166581 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:31:22 crc kubenswrapper[5116]: I1209 14:31:22.167015 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:31:22 crc kubenswrapper[5116]: I1209 14:31:22.201899 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47562: no serving certificate available for the kubelet" Dec 09 14:31:22 crc kubenswrapper[5116]: I1209 14:31:22.381161 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47576: no serving certificate available for the kubelet" Dec 09 14:31:22 crc kubenswrapper[5116]: I1209 14:31:22.398634 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47586: no serving certificate available for the kubelet" Dec 09 14:31:23 crc kubenswrapper[5116]: I1209 14:31:23.929798 5116 ???:1] "http: TLS handshake error from 192.168.126.11:48606: no serving certificate available for the kubelet" Dec 09 14:31:29 crc kubenswrapper[5116]: I1209 14:31:29.075747 5116 ???:1] "http: TLS handshake error from 192.168.126.11:48614: no serving certificate available for the kubelet" Dec 09 14:31:33 crc kubenswrapper[5116]: I1209 14:31:33.710465 5116 ???:1] "http: TLS handshake error from 192.168.126.11:38844: no serving certificate available for the kubelet" Dec 09 14:31:33 crc kubenswrapper[5116]: I1209 14:31:33.852077 5116 ???:1] "http: TLS handshake error from 192.168.126.11:38850: no serving certificate available for the kubelet" Dec 09 14:31:33 crc kubenswrapper[5116]: I1209 14:31:33.931237 5116 ???:1] "http: TLS handshake error from 192.168.126.11:38854: no serving certificate available for the kubelet" Dec 09 14:31:39 crc kubenswrapper[5116]: I1209 14:31:39.346802 5116 ???:1] "http: TLS handshake error from 192.168.126.11:38856: no serving certificate available for the kubelet" Dec 09 14:31:49 crc kubenswrapper[5116]: I1209 14:31:49.106218 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53740: no serving certificate available for the kubelet" Dec 09 14:31:49 crc kubenswrapper[5116]: I1209 14:31:49.226508 5116 ???:1] "http: TLS handshake error from 
192.168.126.11:53744: no serving certificate available for the kubelet" Dec 09 14:31:49 crc kubenswrapper[5116]: I1209 14:31:49.251325 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53746: no serving certificate available for the kubelet" Dec 09 14:31:49 crc kubenswrapper[5116]: I1209 14:31:49.257660 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53748: no serving certificate available for the kubelet" Dec 09 14:31:49 crc kubenswrapper[5116]: I1209 14:31:49.396039 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53762: no serving certificate available for the kubelet" Dec 09 14:31:49 crc kubenswrapper[5116]: I1209 14:31:49.396494 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53774: no serving certificate available for the kubelet" Dec 09 14:31:49 crc kubenswrapper[5116]: I1209 14:31:49.435208 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53776: no serving certificate available for the kubelet" Dec 09 14:31:49 crc kubenswrapper[5116]: I1209 14:31:49.533025 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53784: no serving certificate available for the kubelet" Dec 09 14:31:49 crc kubenswrapper[5116]: I1209 14:31:49.692736 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53798: no serving certificate available for the kubelet" Dec 09 14:31:49 crc kubenswrapper[5116]: I1209 14:31:49.726829 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53812: no serving certificate available for the kubelet" Dec 09 14:31:49 crc kubenswrapper[5116]: I1209 14:31:49.731059 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53828: no serving certificate available for the kubelet" Dec 09 14:31:49 crc kubenswrapper[5116]: I1209 14:31:49.859937 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53842: no serving certificate available for the kubelet" Dec 09 14:31:49 crc kubenswrapper[5116]: I1209 14:31:49.872573 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53856: no serving certificate available for the kubelet" Dec 09 14:31:49 crc kubenswrapper[5116]: I1209 14:31:49.923266 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53866: no serving certificate available for the kubelet" Dec 09 14:31:50 crc kubenswrapper[5116]: I1209 14:31:50.020993 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53872: no serving certificate available for the kubelet" Dec 09 14:31:50 crc kubenswrapper[5116]: I1209 14:31:50.176473 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53880: no serving certificate available for the kubelet" Dec 09 14:31:50 crc kubenswrapper[5116]: I1209 14:31:50.199355 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53892: no serving certificate available for the kubelet" Dec 09 14:31:50 crc kubenswrapper[5116]: I1209 14:31:50.213182 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53908: no serving certificate available for the kubelet" Dec 09 14:31:50 crc kubenswrapper[5116]: I1209 14:31:50.344022 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53916: no serving certificate available for the kubelet" Dec 09 14:31:50 crc kubenswrapper[5116]: I1209 14:31:50.386122 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53926: no serving certificate available for the kubelet" Dec 09 14:31:50 crc kubenswrapper[5116]: I1209 14:31:50.391323 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53936: no serving certificate available for the kubelet" Dec 09 14:31:50 crc kubenswrapper[5116]: I1209 14:31:50.528995 5116 ???:1] "http: 
TLS handshake error from 192.168.126.11:53944: no serving certificate available for the kubelet"
Dec 09 14:31:50 crc kubenswrapper[5116]: I1209 14:31:50.681590 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53946: no serving certificate available for the kubelet"
Dec 09 14:31:50 crc kubenswrapper[5116]: I1209 14:31:50.710835 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53952: no serving certificate available for the kubelet"
Dec 09 14:31:50 crc kubenswrapper[5116]: I1209 14:31:50.742048 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53958: no serving certificate available for the kubelet"
Dec 09 14:31:50 crc kubenswrapper[5116]: I1209 14:31:50.851361 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53966: no serving certificate available for the kubelet"
Dec 09 14:31:50 crc kubenswrapper[5116]: I1209 14:31:50.857165 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53978: no serving certificate available for the kubelet"
Dec 09 14:31:50 crc kubenswrapper[5116]: I1209 14:31:50.901297 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53982: no serving certificate available for the kubelet"
Dec 09 14:31:51 crc kubenswrapper[5116]: I1209 14:31:51.009660 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53986: no serving certificate available for the kubelet"
Dec 09 14:31:51 crc kubenswrapper[5116]: I1209 14:31:51.161037 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53988: no serving certificate available for the kubelet"
Dec 09 14:31:51 crc kubenswrapper[5116]: I1209 14:31:51.189226 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53990: no serving certificate available for the kubelet"
Dec 09 14:31:51 crc kubenswrapper[5116]: I1209 14:31:51.193253 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53994: no serving certificate available for the kubelet"
Dec 09 14:31:51 crc kubenswrapper[5116]: I1209 14:31:51.332369 5116 ???:1] "http: TLS handshake error from 192.168.126.11:53998: no serving certificate available for the kubelet"
Dec 09 14:31:51 crc kubenswrapper[5116]: I1209 14:31:51.335543 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54010: no serving certificate available for the kubelet"
Dec 09 14:31:51 crc kubenswrapper[5116]: I1209 14:31:51.395589 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54012: no serving certificate available for the kubelet"
Dec 09 14:31:51 crc kubenswrapper[5116]: I1209 14:31:51.504545 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54018: no serving certificate available for the kubelet"
Dec 09 14:31:51 crc kubenswrapper[5116]: I1209 14:31:51.657484 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54024: no serving certificate available for the kubelet"
Dec 09 14:31:51 crc kubenswrapper[5116]: I1209 14:31:51.659423 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54032: no serving certificate available for the kubelet"
Dec 09 14:31:51 crc kubenswrapper[5116]: I1209 14:31:51.659974 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54038: no serving certificate available for the kubelet"
Dec 09 14:31:51 crc kubenswrapper[5116]: I1209 14:31:51.842606 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54046: no serving certificate available for the kubelet"
Dec 09 14:31:51 crc kubenswrapper[5116]: I1209 14:31:51.856189 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54058: no serving certificate available for the kubelet"
Dec 09 14:31:51 crc kubenswrapper[5116]: I1209 14:31:51.871567 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54066: no serving certificate available for the kubelet"
Dec 09 14:31:51 crc kubenswrapper[5116]: I1209 14:31:51.890533 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54068: no serving certificate available for the kubelet"
Dec 09 14:31:52 crc kubenswrapper[5116]: I1209 14:31:52.021046 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54074: no serving certificate available for the kubelet"
Dec 09 14:31:52 crc kubenswrapper[5116]: I1209 14:31:52.161787 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54084: no serving certificate available for the kubelet"
Dec 09 14:31:52 crc kubenswrapper[5116]: I1209 14:31:52.167407 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 14:31:52 crc kubenswrapper[5116]: I1209 14:31:52.167493 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 14:31:52 crc kubenswrapper[5116]: I1209 14:31:52.175921 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54086: no serving certificate available for the kubelet"
Dec 09 14:31:52 crc kubenswrapper[5116]: I1209 14:31:52.191808 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54102: no serving certificate available for the kubelet"
Dec 09 14:31:52 crc kubenswrapper[5116]: I1209 14:31:52.325734 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54118: no serving certificate available for the kubelet"
Dec 09 14:31:52 crc kubenswrapper[5116]: I1209 14:31:52.326305 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54120: no serving certificate available for the kubelet"
Dec 09 14:31:52 crc kubenswrapper[5116]: I1209 14:31:52.368047 5116 ???:1] "http: TLS handshake error from 192.168.126.11:54130: no serving certificate available for the kubelet"
Dec 09 14:31:59 crc kubenswrapper[5116]: I1209 14:31:59.859056 5116 ???:1] "http: TLS handshake error from 192.168.126.11:38720: no serving certificate available for the kubelet"
Dec 09 14:32:03 crc kubenswrapper[5116]: I1209 14:32:03.813298 5116 ???:1] "http: TLS handshake error from 192.168.126.11:38806: no serving certificate available for the kubelet"
Dec 09 14:32:03 crc kubenswrapper[5116]: I1209 14:32:03.966913 5116 ???:1] "http: TLS handshake error from 192.168.126.11:38808: no serving certificate available for the kubelet"
Dec 09 14:32:04 crc kubenswrapper[5116]: I1209 14:32:04.036163 5116 ???:1] "http: TLS handshake error from 192.168.126.11:38812: no serving certificate available for the kubelet"
Dec 09 14:32:04 crc kubenswrapper[5116]: I1209 14:32:04.116550 5116 ???:1] "http: TLS handshake error from 192.168.126.11:38820: no serving certificate available for the kubelet"
Dec 09 14:32:04 crc kubenswrapper[5116]: I1209 14:32:04.150040 5116 ???:1] "http: TLS handshake error from 192.168.126.11:38826: no serving certificate available for the kubelet"
Dec 09 14:32:22 crc kubenswrapper[5116]: I1209 14:32:22.166753 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 14:32:22 crc kubenswrapper[5116]: I1209 14:32:22.167217 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 14:32:22 crc kubenswrapper[5116]: I1209 14:32:22.167258 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-phdhk"
Dec 09 14:32:22 crc kubenswrapper[5116]: I1209 14:32:22.167781 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c49f65dc7d0b501013e6e6317fab8447f902413dcd36308e941e8ac6d3a050d5"} pod="openshift-machine-config-operator/machine-config-daemon-phdhk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 14:32:22 crc kubenswrapper[5116]: I1209 14:32:22.167835 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" containerID="cri-o://c49f65dc7d0b501013e6e6317fab8447f902413dcd36308e941e8ac6d3a050d5" gracePeriod=600
Dec 09 14:32:22 crc kubenswrapper[5116]: I1209 14:32:22.295449 5116 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 14:32:22 crc kubenswrapper[5116]: I1209 14:32:22.387088 5116 generic.go:358] "Generic (PLEG): container finished" podID="140ab739-f0e3-4429-8e23-03782755777d" containerID="c49f65dc7d0b501013e6e6317fab8447f902413dcd36308e941e8ac6d3a050d5" exitCode=0
Dec 09 14:32:22 crc kubenswrapper[5116]: I1209 14:32:22.387123 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" event={"ID":"140ab739-f0e3-4429-8e23-03782755777d","Type":"ContainerDied","Data":"c49f65dc7d0b501013e6e6317fab8447f902413dcd36308e941e8ac6d3a050d5"}
Dec 09 14:32:22 crc kubenswrapper[5116]: I1209 14:32:22.387209 5116 scope.go:117] "RemoveContainer" containerID="7315817fcea3499a7635fc1289860612fc7524332d75eb55b4b0ddd1ffdb8798"
Dec 09 14:32:23 crc kubenswrapper[5116]: I1209 14:32:23.397813 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" event={"ID":"140ab739-f0e3-4429-8e23-03782755777d","Type":"ContainerStarted","Data":"e211a38cd8e94b37dc25935f63fd6c2cf59ae9f37c68b78d0082ff5dcbdec026"}
Dec 09 14:32:40 crc kubenswrapper[5116]: I1209 14:32:40.520746 5116 generic.go:358] "Generic (PLEG): container finished" podID="ae6181d4-c434-4839-a5f3-7784c983c8c1" containerID="6793690fa8747ab9927cb8630a0403b63485284caea449db37da4d1b1619e130" exitCode=0
Dec 09 14:32:40 crc kubenswrapper[5116]: I1209 14:32:40.520785 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l96x5/must-gather-nf65k" event={"ID":"ae6181d4-c434-4839-a5f3-7784c983c8c1","Type":"ContainerDied","Data":"6793690fa8747ab9927cb8630a0403b63485284caea449db37da4d1b1619e130"}
Dec 09 14:32:40 crc kubenswrapper[5116]: I1209 14:32:40.521976 5116 scope.go:117] "RemoveContainer" containerID="6793690fa8747ab9927cb8630a0403b63485284caea449db37da4d1b1619e130"
Dec 09 14:32:40 crc kubenswrapper[5116]: I1209 14:32:40.856056 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45348: no serving certificate available for the kubelet"
Dec 09 14:32:42 crc kubenswrapper[5116]: I1209 14:32:42.906677 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45356: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.068154 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45364: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.087357 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45370: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.114281 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45382: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.127155 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45386: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.144022 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45398: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.159943 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45402: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.174858 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45406: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.189336 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45422: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.339124 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45426: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.353930 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45428: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.381276 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45434: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.393295 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45440: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.408339 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45446: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.420705 5116 ???:1] "http: TLS handshake error from 192.168.126.11:44222: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.436007 5116 ???:1] "http: TLS handshake error from 192.168.126.11:44234: no serving certificate available for the kubelet"
Dec 09 14:32:43 crc kubenswrapper[5116]: I1209 14:32:43.450115 5116 ???:1] "http: TLS handshake error from 192.168.126.11:44240: no serving certificate available for the kubelet"
Dec 09 14:32:48 crc kubenswrapper[5116]: I1209 14:32:48.497407 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l96x5/must-gather-nf65k"]
Dec 09 14:32:48 crc kubenswrapper[5116]: I1209 14:32:48.498848 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-l96x5/must-gather-nf65k" podUID="ae6181d4-c434-4839-a5f3-7784c983c8c1" containerName="copy" containerID="cri-o://d997648a2a32358c74eebe6b2577db640ec8a79dc58a46a75335522166d59d22" gracePeriod=2
Dec 09 14:32:48 crc kubenswrapper[5116]: I1209 14:32:48.502495 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l96x5/must-gather-nf65k"]
Dec 09 14:32:48 crc kubenswrapper[5116]: I1209 14:32:48.503005 5116 status_manager.go:895] "Failed to get status for pod" podUID="ae6181d4-c434-4839-a5f3-7784c983c8c1" pod="openshift-must-gather-l96x5/must-gather-nf65k" err="pods \"must-gather-nf65k\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-l96x5\": no relationship found between node 'crc' and this object"
Dec 09 14:32:48 crc kubenswrapper[5116]: I1209 14:32:48.869382 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l96x5_must-gather-nf65k_ae6181d4-c434-4839-a5f3-7784c983c8c1/copy/0.log"
Dec 09 14:32:48 crc kubenswrapper[5116]: I1209 14:32:48.869801 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l96x5/must-gather-nf65k"
Dec 09 14:32:48 crc kubenswrapper[5116]: I1209 14:32:48.871031 5116 status_manager.go:895] "Failed to get status for pod" podUID="ae6181d4-c434-4839-a5f3-7784c983c8c1" pod="openshift-must-gather-l96x5/must-gather-nf65k" err="pods \"must-gather-nf65k\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-l96x5\": no relationship found between node 'crc' and this object"
Dec 09 14:32:48 crc kubenswrapper[5116]: I1209 14:32:48.971423 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae6181d4-c434-4839-a5f3-7784c983c8c1-must-gather-output\") pod \"ae6181d4-c434-4839-a5f3-7784c983c8c1\" (UID: \"ae6181d4-c434-4839-a5f3-7784c983c8c1\") "
Dec 09 14:32:48 crc kubenswrapper[5116]: I1209 14:32:48.971534 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf26h\" (UniqueName: \"kubernetes.io/projected/ae6181d4-c434-4839-a5f3-7784c983c8c1-kube-api-access-sf26h\") pod \"ae6181d4-c434-4839-a5f3-7784c983c8c1\" (UID: \"ae6181d4-c434-4839-a5f3-7784c983c8c1\") "
Dec 09 14:32:48 crc kubenswrapper[5116]: I1209 14:32:48.982249 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6181d4-c434-4839-a5f3-7784c983c8c1-kube-api-access-sf26h" (OuterVolumeSpecName: "kube-api-access-sf26h") pod "ae6181d4-c434-4839-a5f3-7784c983c8c1" (UID: "ae6181d4-c434-4839-a5f3-7784c983c8c1"). InnerVolumeSpecName "kube-api-access-sf26h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 09 14:32:49 crc kubenswrapper[5116]: I1209 14:32:49.024966 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae6181d4-c434-4839-a5f3-7784c983c8c1-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ae6181d4-c434-4839-a5f3-7784c983c8c1" (UID: "ae6181d4-c434-4839-a5f3-7784c983c8c1"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 09 14:32:49 crc kubenswrapper[5116]: I1209 14:32:49.073536 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sf26h\" (UniqueName: \"kubernetes.io/projected/ae6181d4-c434-4839-a5f3-7784c983c8c1-kube-api-access-sf26h\") on node \"crc\" DevicePath \"\""
Dec 09 14:32:49 crc kubenswrapper[5116]: I1209 14:32:49.073570 5116 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae6181d4-c434-4839-a5f3-7784c983c8c1-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 09 14:32:49 crc kubenswrapper[5116]: I1209 14:32:49.595161 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l96x5_must-gather-nf65k_ae6181d4-c434-4839-a5f3-7784c983c8c1/copy/0.log"
Dec 09 14:32:49 crc kubenswrapper[5116]: I1209 14:32:49.595850 5116 generic.go:358] "Generic (PLEG): container finished" podID="ae6181d4-c434-4839-a5f3-7784c983c8c1" containerID="d997648a2a32358c74eebe6b2577db640ec8a79dc58a46a75335522166d59d22" exitCode=143
Dec 09 14:32:49 crc kubenswrapper[5116]: I1209 14:32:49.595917 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l96x5/must-gather-nf65k"
Dec 09 14:32:49 crc kubenswrapper[5116]: I1209 14:32:49.595932 5116 scope.go:117] "RemoveContainer" containerID="d997648a2a32358c74eebe6b2577db640ec8a79dc58a46a75335522166d59d22"
Dec 09 14:32:49 crc kubenswrapper[5116]: I1209 14:32:49.598916 5116 status_manager.go:895] "Failed to get status for pod" podUID="ae6181d4-c434-4839-a5f3-7784c983c8c1" pod="openshift-must-gather-l96x5/must-gather-nf65k" err="pods \"must-gather-nf65k\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-l96x5\": no relationship found between node 'crc' and this object"
Dec 09 14:32:49 crc kubenswrapper[5116]: I1209 14:32:49.613079 5116 status_manager.go:895] "Failed to get status for pod" podUID="ae6181d4-c434-4839-a5f3-7784c983c8c1" pod="openshift-must-gather-l96x5/must-gather-nf65k" err="pods \"must-gather-nf65k\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-l96x5\": no relationship found between node 'crc' and this object"
Dec 09 14:32:49 crc kubenswrapper[5116]: I1209 14:32:49.617700 5116 scope.go:117] "RemoveContainer" containerID="6793690fa8747ab9927cb8630a0403b63485284caea449db37da4d1b1619e130"
Dec 09 14:32:49 crc kubenswrapper[5116]: I1209 14:32:49.681352 5116 scope.go:117] "RemoveContainer" containerID="d997648a2a32358c74eebe6b2577db640ec8a79dc58a46a75335522166d59d22"
Dec 09 14:32:49 crc kubenswrapper[5116]: E1209 14:32:49.681767 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d997648a2a32358c74eebe6b2577db640ec8a79dc58a46a75335522166d59d22\": container with ID starting with d997648a2a32358c74eebe6b2577db640ec8a79dc58a46a75335522166d59d22 not found: ID does not exist" containerID="d997648a2a32358c74eebe6b2577db640ec8a79dc58a46a75335522166d59d22"
Dec 09 14:32:49 crc kubenswrapper[5116]: I1209 14:32:49.681801 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d997648a2a32358c74eebe6b2577db640ec8a79dc58a46a75335522166d59d22"} err="failed to get container status \"d997648a2a32358c74eebe6b2577db640ec8a79dc58a46a75335522166d59d22\": rpc error: code = NotFound desc = could not find container \"d997648a2a32358c74eebe6b2577db640ec8a79dc58a46a75335522166d59d22\": container with ID starting with d997648a2a32358c74eebe6b2577db640ec8a79dc58a46a75335522166d59d22 not found: ID does not exist"
Dec 09 14:32:49 crc kubenswrapper[5116]: I1209 14:32:49.681818 5116 scope.go:117] "RemoveContainer" containerID="6793690fa8747ab9927cb8630a0403b63485284caea449db37da4d1b1619e130"
Dec 09 14:32:49 crc kubenswrapper[5116]: E1209 14:32:49.682193 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6793690fa8747ab9927cb8630a0403b63485284caea449db37da4d1b1619e130\": container with ID starting with 6793690fa8747ab9927cb8630a0403b63485284caea449db37da4d1b1619e130 not found: ID does not exist" containerID="6793690fa8747ab9927cb8630a0403b63485284caea449db37da4d1b1619e130"
Dec 09 14:32:49 crc kubenswrapper[5116]: I1209 14:32:49.682214 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6793690fa8747ab9927cb8630a0403b63485284caea449db37da4d1b1619e130"} err="failed to get container status \"6793690fa8747ab9927cb8630a0403b63485284caea449db37da4d1b1619e130\": rpc error: code = NotFound desc = could not find container \"6793690fa8747ab9927cb8630a0403b63485284caea449db37da4d1b1619e130\": container with ID starting with 6793690fa8747ab9927cb8630a0403b63485284caea449db37da4d1b1619e130 not found: ID does not exist"
Dec 09 14:32:49 crc kubenswrapper[5116]: I1209 14:32:49.758998 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6181d4-c434-4839-a5f3-7784c983c8c1" path="/var/lib/kubelet/pods/ae6181d4-c434-4839-a5f3-7784c983c8c1/volumes"
Dec 09 14:34:02 crc kubenswrapper[5116]: I1209 14:34:02.803894 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35072: no serving certificate available for the kubelet"
Dec 09 14:34:22 crc kubenswrapper[5116]: I1209 14:34:22.166551 5116 patch_prober.go:28] interesting pod/machine-config-daemon-phdhk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 14:34:22 crc kubenswrapper[5116]: I1209 14:34:22.168487 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-phdhk" podUID="140ab739-f0e3-4429-8e23-03782755777d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 14:34:22 crc kubenswrapper[5116]: I1209 14:34:22.183739 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-554lf_2a441b53-f957-4f01-a123-a96c637c3fe2/kube-multus/0.log"
Dec 09 14:34:22 crc kubenswrapper[5116]: I1209 14:34:22.184622 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-554lf_2a441b53-f957-4f01-a123-a96c637c3fe2/kube-multus/0.log"
Dec 09 14:34:22 crc kubenswrapper[5116]: I1209 14:34:22.194166 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 09 14:34:22 crc kubenswrapper[5116]: I1209 14:34:22.195431 5116 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515116031621024441 0ustar coreroot‹íÁ  ÷Om7 €7šÞ'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015116031622017357 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015116026653016512 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015116026654015463 5ustar corecore